00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3662 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3264 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.153 Fetching changes from the remote Git repository 00:00:00.155 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.219 > git --version # 'git version 2.39.2' 00:00:00.219 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.238 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.238 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.650 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.660 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.671 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.671 > git config core.sparsecheckout # timeout=10 00:00:04.681 > git read-tree -mu HEAD # timeout=10 00:00:04.701 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.721 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.721 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.822 [Pipeline] Start of Pipeline 00:00:04.836 [Pipeline] library 00:00:04.838 Loading library shm_lib@master 00:00:04.838 Library shm_lib@master is cached. Copying from home. 00:00:04.854 [Pipeline] node 00:00:04.862 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.863 [Pipeline] { 00:00:04.872 [Pipeline] catchError 00:00:04.873 [Pipeline] { 00:00:04.884 [Pipeline] wrap 00:00:04.891 [Pipeline] { 00:00:04.897 [Pipeline] stage 00:00:04.898 [Pipeline] { (Prologue) 00:00:05.068 [Pipeline] sh 00:00:05.350 + logger -p user.info -t JENKINS-CI 00:00:05.367 [Pipeline] echo 00:00:05.369 Node: GP11 00:00:05.375 [Pipeline] sh 00:00:05.664 [Pipeline] setCustomBuildProperty 00:00:05.675 [Pipeline] echo 00:00:05.676 Cleanup processes 00:00:05.682 [Pipeline] sh 00:00:05.958 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.958 1277608 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.969 [Pipeline] sh 00:00:06.242 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.242 ++ awk '{print $1}' 00:00:06.242 ++ grep -v 'sudo pgrep' 00:00:06.242 + sudo kill -9 00:00:06.242 + true 00:00:06.252 [Pipeline] cleanWs 00:00:06.260 [WS-CLEANUP] Deleting project workspace... 00:00:06.260 [WS-CLEANUP] Deferred wipeout is used... 
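The cleanup step above pipes `pgrep -af` through `grep -v 'sudo pgrep'` and `awk '{print $1}'` to collect stale SPDK pids, then issues `sudo kill -9`; here the match list was empty, so the kill had nothing to act on and the trailing `+ true` kept the step green. A minimal consolidated sketch of that idiom (a hypothetical standalone form, not a script that exists in the pipeline):

  # List processes whose command line mentions the workspace, drop the
  # pgrep invocation itself, and keep only the pid column.
  pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
      | grep -v 'sudo pgrep' | awk '{print $1}')
  # Kill whatever is left; tolerate an empty list so the step never fails.
  [ -n "$pids" ] && sudo kill -9 $pids || true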
00:00:06.265 [WS-CLEANUP] done 00:00:06.269 [Pipeline] setCustomBuildProperty 00:00:06.280 [Pipeline] sh 00:00:06.552 + sudo git config --global --replace-all safe.directory '*' 00:00:06.633 [Pipeline] httpRequest 00:00:06.649 [Pipeline] echo 00:00:06.650 Sorcerer 10.211.164.101 is alive 00:00:06.657 [Pipeline] httpRequest 00:00:06.660 HttpMethod: GET 00:00:06.661 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.661 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.662 Response Code: HTTP/1.1 200 OK 00:00:06.662 Success: Status code 200 is in the accepted range: 200,404 00:00:06.663 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.876 [Pipeline] sh 00:00:08.188 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.204 [Pipeline] httpRequest 00:00:08.232 [Pipeline] echo 00:00:08.234 Sorcerer 10.211.164.101 is alive 00:00:08.241 [Pipeline] httpRequest 00:00:08.245 HttpMethod: GET 00:00:08.245 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:08.246 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:08.249 Response Code: HTTP/1.1 200 OK 00:00:08.249 Success: Status code 200 is in the accepted range: 200,404 00:00:08.249 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:30.657 [Pipeline] sh 00:00:30.936 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:33.494 [Pipeline] sh 00:00:33.769 + git -C spdk log --oneline -n5 00:00:33.769 719d03c6a sock/uring: only register net impl if supported 00:00:33.769 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:33.769 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:33.769 6c7c1f57e accel: add sequence outstanding stat 00:00:33.769 3bc8e6a26 accel: add utility to put task 00:00:33.788 [Pipeline] withCredentials 00:00:33.799 > git --version # timeout=10 00:00:33.810 > git --version # 'git version 2.39.2' 00:00:33.826 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:33.829 [Pipeline] { 00:00:33.837 [Pipeline] retry 00:00:33.839 [Pipeline] { 00:00:33.855 [Pipeline] sh 00:00:34.136 + git ls-remote http://dpdk.org/git/dpdk main 00:00:35.091 [Pipeline] } 00:00:35.114 [Pipeline] // retry 00:00:35.119 [Pipeline] } 00:00:35.141 [Pipeline] // withCredentials 00:00:35.152 [Pipeline] httpRequest 00:00:35.181 [Pipeline] echo 00:00:35.183 Sorcerer 10.211.164.101 is alive 00:00:35.191 [Pipeline] httpRequest 00:00:35.195 HttpMethod: GET 00:00:35.196 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:35.196 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:35.206 Response Code: HTTP/1.1 200 OK 00:00:35.207 Success: Status code 200 is in the accepted range: 200,404 00:00:35.207 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:13.227 [Pipeline] sh 00:01:13.512 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:15.433 [Pipeline] sh 00:01:15.727 + git -C dpdk log --oneline -n5 00:01:15.727 fa8d2f7f28 version: 24.07-rc2 00:01:15.727 d4bc3c2e01 maintainers: 
update for cxgbe driver 00:01:15.727 2227c0ed9a maintainers: update for Microsoft drivers 00:01:15.727 8385370337 maintainers: update for Arm 00:01:15.727 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:01:15.736 [Pipeline] } 00:01:15.752 [Pipeline] // stage 00:01:15.760 [Pipeline] stage 00:01:15.762 [Pipeline] { (Prepare) 00:01:15.783 [Pipeline] writeFile 00:01:15.801 [Pipeline] sh 00:01:16.083 + logger -p user.info -t JENKINS-CI 00:01:16.096 [Pipeline] sh 00:01:16.376 + logger -p user.info -t JENKINS-CI 00:01:16.390 [Pipeline] sh 00:01:16.671 + cat autorun-spdk.conf 00:01:16.671 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.671 SPDK_TEST_NVMF=1 00:01:16.671 SPDK_TEST_NVME_CLI=1 00:01:16.671 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.671 SPDK_TEST_NVMF_NICS=e810 00:01:16.671 SPDK_TEST_VFIOUSER=1 00:01:16.671 SPDK_RUN_UBSAN=1 00:01:16.671 NET_TYPE=phy 00:01:16.672 SPDK_TEST_NATIVE_DPDK=main 00:01:16.672 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:16.677 RUN_NIGHTLY=1 00:01:16.684 [Pipeline] readFile 00:01:16.717 [Pipeline] withEnv 00:01:16.719 [Pipeline] { 00:01:16.734 [Pipeline] sh 00:01:17.009 + set -ex 00:01:17.009 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:17.009 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:17.009 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.009 ++ SPDK_TEST_NVMF=1 00:01:17.009 ++ SPDK_TEST_NVME_CLI=1 00:01:17.009 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.009 ++ SPDK_TEST_NVMF_NICS=e810 00:01:17.009 ++ SPDK_TEST_VFIOUSER=1 00:01:17.009 ++ SPDK_RUN_UBSAN=1 00:01:17.009 ++ NET_TYPE=phy 00:01:17.009 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:17.009 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:17.009 ++ RUN_NIGHTLY=1 00:01:17.009 + case $SPDK_TEST_NVMF_NICS in 00:01:17.009 + DRIVERS=ice 00:01:17.009 + [[ tcp == \r\d\m\a ]] 00:01:17.009 + [[ -n ice ]] 00:01:17.009 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:17.009 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:17.009 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:17.009 rmmod: ERROR: Module irdma is not currently loaded 00:01:17.009 rmmod: ERROR: Module i40iw is not currently loaded 00:01:17.009 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:17.009 + true 00:01:17.009 + for D in $DRIVERS 00:01:17.009 + sudo modprobe ice 00:01:17.009 + exit 0 00:01:17.017 [Pipeline] } 00:01:17.035 [Pipeline] // withEnv 00:01:17.040 [Pipeline] } 00:01:17.056 [Pipeline] // stage 00:01:17.066 [Pipeline] catchError 00:01:17.068 [Pipeline] { 00:01:17.080 [Pipeline] timeout 00:01:17.080 Timeout set to expire in 50 min 00:01:17.081 [Pipeline] { 00:01:17.093 [Pipeline] stage 00:01:17.096 [Pipeline] { (Tests) 00:01:17.110 [Pipeline] sh 00:01:17.390 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.390 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.390 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.390 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:17.390 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.390 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.390 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:17.390 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.390 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.390 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.390 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:17.390 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.390 + source /etc/os-release 00:01:17.390 ++ NAME='Fedora Linux' 00:01:17.390 ++ VERSION='38 (Cloud Edition)' 00:01:17.390 ++ ID=fedora 00:01:17.390 ++ VERSION_ID=38 00:01:17.390 ++ VERSION_CODENAME= 00:01:17.390 ++ PLATFORM_ID=platform:f38 00:01:17.390 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:17.390 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.390 ++ LOGO=fedora-logo-icon 00:01:17.390 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:17.390 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.390 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:17.390 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.390 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.390 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.390 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:17.390 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.390 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:17.390 ++ SUPPORT_END=2024-05-14 00:01:17.390 ++ VARIANT='Cloud Edition' 00:01:17.390 ++ VARIANT_ID=cloud 00:01:17.390 + uname -a 00:01:17.390 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:17.390 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:18.323 Hugepages 00:01:18.323 node hugesize free / total 00:01:18.323 node0 1048576kB 0 / 0 00:01:18.323 node0 2048kB 0 / 0 00:01:18.323 node1 1048576kB 0 / 0 00:01:18.323 node1 2048kB 0 / 0 00:01:18.324 00:01:18.324 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.324 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:18.324 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:18.324 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:18.324 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:18.324 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:18.324 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:18.324 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:18.324 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:18.324 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:18.324 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:18.324 + rm -f /tmp/spdk-ld-path 00:01:18.324 + source autorun-spdk.conf 00:01:18.324 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.324 ++ SPDK_TEST_NVMF=1 00:01:18.324 ++ SPDK_TEST_NVME_CLI=1 00:01:18.324 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.324 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.324 ++ SPDK_TEST_VFIOUSER=1 00:01:18.324 ++ SPDK_RUN_UBSAN=1 00:01:18.324 ++ NET_TYPE=phy 00:01:18.324 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:18.324 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.324 ++ RUN_NIGHTLY=1 00:01:18.324 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.324 + [[ -n '' ]] 00:01:18.324 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.582 + for M in /var/spdk/build-*-manifest.txt 00:01:18.582 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.582 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.582 + for M in /var/spdk/build-*-manifest.txt 00:01:18.582 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.582 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.582 ++ uname 00:01:18.582 + [[ Linux == \L\i\n\u\x ]] 00:01:18.582 + sudo dmesg -T 00:01:18.582 + sudo dmesg --clear 00:01:18.582 + dmesg_pid=1278948 00:01:18.582 + [[ Fedora Linux == FreeBSD ]] 00:01:18.582 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.582 + sudo dmesg -Tw 00:01:18.582 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.582 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.582 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.582 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.582 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.582 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.582 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.582 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.582 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.582 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.582 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.582 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.582 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.582 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.582 Test configuration: 00:01:18.582 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.582 SPDK_TEST_NVMF=1 00:01:18.582 SPDK_TEST_NVME_CLI=1 00:01:18.582 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.582 SPDK_TEST_NVMF_NICS=e810 00:01:18.582 SPDK_TEST_VFIOUSER=1 00:01:18.582 SPDK_RUN_UBSAN=1 00:01:18.582 NET_TYPE=phy 00:01:18.582 SPDK_TEST_NATIVE_DPDK=main 00:01:18.582 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.582 RUN_NIGHTLY=1 06:48:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:18.582 06:48:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.582 06:48:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.582 06:48:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.582 06:48:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.582 06:48:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.582 06:48:47 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.582 06:48:47 -- paths/export.sh@5 -- $ export PATH 00:01:18.583 06:48:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.583 06:48:47 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:18.583 06:48:47 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:18.583 06:48:47 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720846127.XXXXXX 00:01:18.583 06:48:47 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720846127.fkg55r 00:01:18.583 06:48:47 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:18.583 06:48:47 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:01:18.583 06:48:47 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.583 06:48:47 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:18.583 06:48:47 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:18.583 06:48:47 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.583 06:48:47 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:18.583 06:48:47 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:18.583 06:48:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.583 06:48:47 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:18.583 06:48:47 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:18.583 06:48:47 -- pm/common@17 -- $ local monitor 00:01:18.583 06:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.583 06:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.583 06:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.583 06:48:47 -- pm/common@21 -- $ date +%s 00:01:18.583 06:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.583 06:48:47 -- pm/common@21 -- $ date +%s 00:01:18.583 06:48:47 -- pm/common@25 -- $ sleep 1 00:01:18.583 06:48:47 -- pm/common@21 -- $ date +%s 00:01:18.583 06:48:47 -- pm/common@21 -- $ date +%s 00:01:18.583 06:48:47 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720846127 00:01:18.583 06:48:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720846127 00:01:18.583 06:48:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720846127 00:01:18.583 06:48:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720846127 00:01:18.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720846127_collect-vmstat.pm.log 00:01:18.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720846127_collect-cpu-load.pm.log 00:01:18.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720846127_collect-cpu-temp.pm.log 00:01:18.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720846127_collect-bmc-pm.bmc.pm.log 00:01:19.516 06:48:48 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:19.516 06:48:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.516 06:48:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.516 06:48:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.516 06:48:48 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.516 Sat Jul 13 04:48:48 AM UTC 2024 00:01:19.516 06:48:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.516 v24.09-pre-202-g719d03c6a 00:01:19.516 06:48:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:19.516 06:48:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.516 06:48:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.516 06:48:48 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:19.516 06:48:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:19.516 06:48:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.516 ************************************ 00:01:19.516 START TEST ubsan 00:01:19.516 ************************************ 00:01:19.516 06:48:48 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:19.516 using ubsan 00:01:19.516 00:01:19.516 real 0m0.000s 00:01:19.516 user 0m0.000s 00:01:19.516 sys 0m0.000s 00:01:19.516 06:48:48 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:19.516 06:48:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:19.516 ************************************ 00:01:19.516 END TEST ubsan 00:01:19.516 ************************************ 00:01:19.774 06:48:48 -- common/autotest_common.sh@1142 -- $ return 0 00:01:19.774 06:48:48 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:19.774 06:48:48 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:19.774 06:48:48 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:19.774 06:48:48 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:19.774 06:48:48 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:19.774 06:48:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.774 ************************************ 00:01:19.774 START TEST build_native_dpdk 00:01:19.774 ************************************ 00:01:19.774 06:48:49 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:19.774 fa8d2f7f28 version: 24.07-rc2 00:01:19.774 d4bc3c2e01 maintainers: update for cxgbe driver 00:01:19.774 2227c0ed9a maintainers: update for Microsoft drivers 00:01:19.774 8385370337 maintainers: update for Arm 00:01:19.774 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:19.774 06:48:49 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:19.775 
06:48:49 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:19.775 06:48:49 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:19.775 patching file config/rte_config.h 00:01:19.775 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:19.775 06:48:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:24.012 The Meson build system 00:01:24.012 Version: 1.3.1 00:01:24.012 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.012 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:24.012 Build type: native build 00:01:24.012 Program cat found: YES (/usr/bin/cat) 00:01:24.012 Project name: DPDK 00:01:24.012 Project version: 24.07.0-rc2 00:01:24.012 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:24.012 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:24.012 Host machine cpu family: x86_64 00:01:24.012 Host machine cpu: x86_64 00:01:24.012 Message: ## Building in Developer Mode ## 00:01:24.012 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:24.012 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:24.012 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:24.012 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:24.012 Program cat found: YES (/usr/bin/cat) 00:01:24.012 config/meson.build:120: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:01:24.012 Compiler for C supports arguments -march=native: YES 00:01:24.012 Checking for size of "void *" : 8 00:01:24.012 Checking for size of "void *" : 8 (cached) 00:01:24.012 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:24.012 Library m found: YES 00:01:24.012 Library numa found: YES 00:01:24.012 Has header "numaif.h" : YES 00:01:24.012 Library fdt found: NO 00:01:24.012 Library execinfo found: NO 00:01:24.012 Has header "execinfo.h" : YES 00:01:24.012 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:24.012 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:24.012 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:24.012 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:24.012 Run-time dependency openssl found: YES 3.0.9 00:01:24.012 Run-time dependency libpcap found: YES 1.10.4 00:01:24.012 Has header "pcap.h" with dependency libpcap: YES 00:01:24.012 Compiler for C supports arguments -Wcast-qual: YES 00:01:24.012 Compiler for C supports arguments -Wdeprecated: YES 00:01:24.012 Compiler for C supports arguments -Wformat: YES 00:01:24.012 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:24.012 Compiler for C supports arguments -Wformat-security: NO 00:01:24.012 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:24.012 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:24.012 Compiler for C supports arguments -Wnested-externs: YES 00:01:24.012 Compiler for C supports arguments -Wold-style-definition: YES 00:01:24.012 Compiler for C supports arguments -Wpointer-arith: YES 00:01:24.012 Compiler for C supports arguments -Wsign-compare: YES 00:01:24.012 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:24.012 Compiler for C supports arguments -Wundef: YES 00:01:24.012 Compiler for C supports arguments -Wwrite-strings: YES 00:01:24.012 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:24.012 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:24.012 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:24.012 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:24.012 Program objdump found: YES (/usr/bin/objdump) 00:01:24.012 Compiler for C supports arguments -mavx512f: YES 00:01:24.012 Checking if "AVX512 checking" compiles: YES 00:01:24.012 Fetching value of define "__SSE4_2__" : 1 00:01:24.012 Fetching value of define "__AES__" : 1 00:01:24.012 Fetching value of define "__AVX__" : 1 00:01:24.012 Fetching value of define "__AVX2__" : (undefined) 00:01:24.012 Fetching value of define "__AVX512BW__" : (undefined) 00:01:24.012 Fetching value of define "__AVX512CD__" : (undefined) 00:01:24.012 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:24.012 Fetching value of define "__AVX512F__" : (undefined) 00:01:24.012 Fetching value of define "__AVX512VL__" : (undefined) 00:01:24.012 Fetching value of define "__PCLMUL__" : 1 00:01:24.012 Fetching value of define "__RDRND__" : 1 00:01:24.012 Fetching value of define "__RDSEED__" : (undefined) 00:01:24.012 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:24.013 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:24.013 Message: lib/log: Defining dependency "log" 00:01:24.013 Message: lib/kvargs: Defining dependency "kvargs" 00:01:24.013 Message: lib/argparse: Defining dependency "argparse" 00:01:24.013 Message: lib/telemetry: Defining dependency "telemetry" 
00:01:24.013 Checking for function "getentropy" : NO 00:01:24.013 Message: lib/eal: Defining dependency "eal" 00:01:24.013 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:24.013 Message: lib/ring: Defining dependency "ring" 00:01:24.013 Message: lib/rcu: Defining dependency "rcu" 00:01:24.013 Message: lib/mempool: Defining dependency "mempool" 00:01:24.013 Message: lib/mbuf: Defining dependency "mbuf" 00:01:24.013 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:24.013 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:24.013 Compiler for C supports arguments -mpclmul: YES 00:01:24.013 Compiler for C supports arguments -maes: YES 00:01:24.013 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:24.013 Compiler for C supports arguments -mavx512bw: YES 00:01:24.013 Compiler for C supports arguments -mavx512dq: YES 00:01:24.013 Compiler for C supports arguments -mavx512vl: YES 00:01:24.013 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:24.013 Compiler for C supports arguments -mavx2: YES 00:01:24.013 Compiler for C supports arguments -mavx: YES 00:01:24.013 Message: lib/net: Defining dependency "net" 00:01:24.013 Message: lib/meter: Defining dependency "meter" 00:01:24.013 Message: lib/ethdev: Defining dependency "ethdev" 00:01:24.013 Message: lib/pci: Defining dependency "pci" 00:01:24.013 Message: lib/cmdline: Defining dependency "cmdline" 00:01:24.013 Message: lib/metrics: Defining dependency "metrics" 00:01:24.013 Message: lib/hash: Defining dependency "hash" 00:01:24.013 Message: lib/timer: Defining dependency "timer" 00:01:24.013 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:24.013 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:24.013 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:24.013 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:24.013 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:24.013 Message: lib/acl: Defining dependency "acl" 00:01:24.013 Message: lib/bbdev: Defining dependency "bbdev" 00:01:24.013 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:24.013 Run-time dependency libelf found: YES 0.190 00:01:24.013 Message: lib/bpf: Defining dependency "bpf" 00:01:24.013 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:24.013 Message: lib/compressdev: Defining dependency "compressdev" 00:01:24.013 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:24.013 Message: lib/distributor: Defining dependency "distributor" 00:01:24.013 Message: lib/dmadev: Defining dependency "dmadev" 00:01:24.013 Message: lib/efd: Defining dependency "efd" 00:01:24.013 Message: lib/eventdev: Defining dependency "eventdev" 00:01:24.013 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:24.013 Message: lib/gpudev: Defining dependency "gpudev" 00:01:24.013 Message: lib/gro: Defining dependency "gro" 00:01:24.013 Message: lib/gso: Defining dependency "gso" 00:01:24.013 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:24.013 Message: lib/jobstats: Defining dependency "jobstats" 00:01:24.013 Message: lib/latencystats: Defining dependency "latencystats" 00:01:24.013 Message: lib/lpm: Defining dependency "lpm" 00:01:24.013 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:24.013 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:24.013 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:24.013 Compiler for C supports 
arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:24.013 Message: lib/member: Defining dependency "member" 00:01:24.013 Message: lib/pcapng: Defining dependency "pcapng" 00:01:24.013 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:24.013 Message: lib/power: Defining dependency "power" 00:01:24.013 Message: lib/rawdev: Defining dependency "rawdev" 00:01:24.013 Message: lib/regexdev: Defining dependency "regexdev" 00:01:24.013 Message: lib/mldev: Defining dependency "mldev" 00:01:24.013 Message: lib/rib: Defining dependency "rib" 00:01:24.013 Message: lib/reorder: Defining dependency "reorder" 00:01:24.013 Message: lib/sched: Defining dependency "sched" 00:01:24.013 Message: lib/security: Defining dependency "security" 00:01:24.013 Message: lib/stack: Defining dependency "stack" 00:01:24.013 Has header "linux/userfaultfd.h" : YES 00:01:24.013 Has header "linux/vduse.h" : YES 00:01:24.013 Message: lib/vhost: Defining dependency "vhost" 00:01:24.013 Message: lib/ipsec: Defining dependency "ipsec" 00:01:24.013 Message: lib/pdcp: Defining dependency "pdcp" 00:01:24.013 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:24.013 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:24.013 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:24.013 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:24.013 Message: lib/fib: Defining dependency "fib" 00:01:24.013 Message: lib/port: Defining dependency "port" 00:01:24.013 Message: lib/pdump: Defining dependency "pdump" 00:01:24.013 Message: lib/table: Defining dependency "table" 00:01:24.013 Message: lib/pipeline: Defining dependency "pipeline" 00:01:24.013 Message: lib/graph: Defining dependency "graph" 00:01:24.013 Message: lib/node: Defining dependency "node" 00:01:24.944 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:24.944 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:24.944 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:24.944 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:24.944 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:24.944 Compiler for C supports arguments -Wno-unused-value: YES 00:01:24.944 Compiler for C supports arguments -Wno-format: YES 00:01:24.944 Compiler for C supports arguments -Wno-format-security: YES 00:01:24.944 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:24.944 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:24.945 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:24.945 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:24.945 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:24.945 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:24.945 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:24.945 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:24.945 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:24.945 Has header "sys/epoll.h" : YES 00:01:24.945 Program doxygen found: YES (/usr/bin/doxygen) 00:01:24.945 Configuring doxy-api-html.conf using configuration 00:01:24.945 Configuring doxy-api-man.conf using configuration 00:01:24.945 Program mandb found: YES (/usr/bin/mandb) 00:01:24.945 Program sphinx-build found: NO 00:01:24.945 Configuring rte_build_config.h using configuration 00:01:24.945 Message: 00:01:24.945 ================= 00:01:24.945 Applications Enabled 00:01:24.945 
================= 00:01:24.945 00:01:24.945 apps: 00:01:24.945 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:24.945 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:24.945 test-pmd, test-regex, test-sad, test-security-perf, 00:01:24.945 00:01:24.945 Message: 00:01:24.945 ================= 00:01:24.945 Libraries Enabled 00:01:24.945 ================= 00:01:24.945 00:01:24.945 libs: 00:01:24.945 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:24.945 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:24.945 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:24.945 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:24.945 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:24.945 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:24.945 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:24.945 graph, node, 00:01:24.945 00:01:24.945 Message: 00:01:24.945 =============== 00:01:24.945 Drivers Enabled 00:01:24.945 =============== 00:01:24.945 00:01:24.945 common: 00:01:24.945 00:01:24.945 bus: 00:01:24.945 pci, vdev, 00:01:24.945 mempool: 00:01:24.945 ring, 00:01:24.945 dma: 00:01:24.945 00:01:24.945 net: 00:01:24.945 i40e, 00:01:24.945 raw: 00:01:24.945 00:01:24.945 crypto: 00:01:24.945 00:01:24.945 compress: 00:01:24.945 00:01:24.945 regex: 00:01:24.945 00:01:24.945 ml: 00:01:24.945 00:01:24.945 vdpa: 00:01:24.945 00:01:24.945 event: 00:01:24.945 00:01:24.945 baseband: 00:01:24.945 00:01:24.945 gpu: 00:01:24.945 00:01:24.945 00:01:24.945 Message: 00:01:24.945 ================= 00:01:24.945 Content Skipped 00:01:24.945 ================= 00:01:24.945 00:01:24.945 apps: 00:01:24.945 00:01:24.945 libs: 00:01:24.945 00:01:24.945 drivers: 00:01:24.945 common/cpt: not in enabled drivers build config 00:01:24.945 common/dpaax: not in enabled drivers build config 00:01:24.945 common/iavf: not in enabled drivers build config 00:01:24.945 common/idpf: not in enabled drivers build config 00:01:24.945 common/ionic: not in enabled drivers build config 00:01:24.945 common/mvep: not in enabled drivers build config 00:01:24.945 common/octeontx: not in enabled drivers build config 00:01:24.945 bus/auxiliary: not in enabled drivers build config 00:01:24.945 bus/cdx: not in enabled drivers build config 00:01:24.945 bus/dpaa: not in enabled drivers build config 00:01:24.945 bus/fslmc: not in enabled drivers build config 00:01:24.945 bus/ifpga: not in enabled drivers build config 00:01:24.945 bus/platform: not in enabled drivers build config 00:01:24.945 bus/uacce: not in enabled drivers build config 00:01:24.945 bus/vmbus: not in enabled drivers build config 00:01:24.945 common/cnxk: not in enabled drivers build config 00:01:24.945 common/mlx5: not in enabled drivers build config 00:01:24.945 common/nfp: not in enabled drivers build config 00:01:24.945 common/nitrox: not in enabled drivers build config 00:01:24.945 common/qat: not in enabled drivers build config 00:01:24.945 common/sfc_efx: not in enabled drivers build config 00:01:24.945 mempool/bucket: not in enabled drivers build config 00:01:24.945 mempool/cnxk: not in enabled drivers build config 00:01:24.945 mempool/dpaa: not in enabled drivers build config 00:01:24.945 mempool/dpaa2: not in enabled drivers build config 00:01:24.945 mempool/octeontx: not in enabled drivers build config 00:01:24.945 mempool/stack: 
not in enabled drivers build config 00:01:24.945 dma/cnxk: not in enabled drivers build config 00:01:24.945 dma/dpaa: not in enabled drivers build config 00:01:24.945 dma/dpaa2: not in enabled drivers build config 00:01:24.945 dma/hisilicon: not in enabled drivers build config 00:01:24.945 dma/idxd: not in enabled drivers build config 00:01:24.945 dma/ioat: not in enabled drivers build config 00:01:24.945 dma/odm: not in enabled drivers build config 00:01:24.945 dma/skeleton: not in enabled drivers build config 00:01:24.945 net/af_packet: not in enabled drivers build config 00:01:24.945 net/af_xdp: not in enabled drivers build config 00:01:24.945 net/ark: not in enabled drivers build config 00:01:24.945 net/atlantic: not in enabled drivers build config 00:01:24.945 net/avp: not in enabled drivers build config 00:01:24.945 net/axgbe: not in enabled drivers build config 00:01:24.945 net/bnx2x: not in enabled drivers build config 00:01:24.945 net/bnxt: not in enabled drivers build config 00:01:24.945 net/bonding: not in enabled drivers build config 00:01:24.945 net/cnxk: not in enabled drivers build config 00:01:24.945 net/cpfl: not in enabled drivers build config 00:01:24.945 net/cxgbe: not in enabled drivers build config 00:01:24.945 net/dpaa: not in enabled drivers build config 00:01:24.945 net/dpaa2: not in enabled drivers build config 00:01:24.945 net/e1000: not in enabled drivers build config 00:01:24.945 net/ena: not in enabled drivers build config 00:01:24.945 net/enetc: not in enabled drivers build config 00:01:24.945 net/enetfec: not in enabled drivers build config 00:01:24.945 net/enic: not in enabled drivers build config 00:01:24.945 net/failsafe: not in enabled drivers build config 00:01:24.945 net/fm10k: not in enabled drivers build config 00:01:24.945 net/gve: not in enabled drivers build config 00:01:24.945 net/hinic: not in enabled drivers build config 00:01:24.945 net/hns3: not in enabled drivers build config 00:01:24.945 net/iavf: not in enabled drivers build config 00:01:24.945 net/ice: not in enabled drivers build config 00:01:24.945 net/idpf: not in enabled drivers build config 00:01:24.945 net/igc: not in enabled drivers build config 00:01:24.945 net/ionic: not in enabled drivers build config 00:01:24.945 net/ipn3ke: not in enabled drivers build config 00:01:24.945 net/ixgbe: not in enabled drivers build config 00:01:24.945 net/mana: not in enabled drivers build config 00:01:24.945 net/memif: not in enabled drivers build config 00:01:24.945 net/mlx4: not in enabled drivers build config 00:01:24.945 net/mlx5: not in enabled drivers build config 00:01:24.945 net/mvneta: not in enabled drivers build config 00:01:24.945 net/mvpp2: not in enabled drivers build config 00:01:24.945 net/netvsc: not in enabled drivers build config 00:01:24.945 net/nfb: not in enabled drivers build config 00:01:24.945 net/nfp: not in enabled drivers build config 00:01:24.945 net/ngbe: not in enabled drivers build config 00:01:24.945 net/null: not in enabled drivers build config 00:01:24.945 net/octeontx: not in enabled drivers build config 00:01:24.945 net/octeon_ep: not in enabled drivers build config 00:01:24.945 net/pcap: not in enabled drivers build config 00:01:24.945 net/pfe: not in enabled drivers build config 00:01:24.945 net/qede: not in enabled drivers build config 00:01:24.945 net/ring: not in enabled drivers build config 00:01:24.945 net/sfc: not in enabled drivers build config 00:01:24.945 net/softnic: not in enabled drivers build config 00:01:24.945 net/tap: not in enabled drivers 
build config 00:01:24.945 net/thunderx: not in enabled drivers build config 00:01:24.945 net/txgbe: not in enabled drivers build config 00:01:24.945 net/vdev_netvsc: not in enabled drivers build config 00:01:24.945 net/vhost: not in enabled drivers build config 00:01:24.945 net/virtio: not in enabled drivers build config 00:01:24.945 net/vmxnet3: not in enabled drivers build config 00:01:24.945 raw/cnxk_bphy: not in enabled drivers build config 00:01:24.945 raw/cnxk_gpio: not in enabled drivers build config 00:01:24.945 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:24.945 raw/ifpga: not in enabled drivers build config 00:01:24.945 raw/ntb: not in enabled drivers build config 00:01:24.945 raw/skeleton: not in enabled drivers build config 00:01:24.945 crypto/armv8: not in enabled drivers build config 00:01:24.945 crypto/bcmfs: not in enabled drivers build config 00:01:24.945 crypto/caam_jr: not in enabled drivers build config 00:01:24.945 crypto/ccp: not in enabled drivers build config 00:01:24.945 crypto/cnxk: not in enabled drivers build config 00:01:24.945 crypto/dpaa_sec: not in enabled drivers build config 00:01:24.945 crypto/dpaa2_sec: not in enabled drivers build config 00:01:24.945 crypto/ionic: not in enabled drivers build config 00:01:24.945 crypto/ipsec_mb: not in enabled drivers build config 00:01:24.945 crypto/mlx5: not in enabled drivers build config 00:01:24.945 crypto/mvsam: not in enabled drivers build config 00:01:24.945 crypto/nitrox: not in enabled drivers build config 00:01:24.945 crypto/null: not in enabled drivers build config 00:01:24.945 crypto/octeontx: not in enabled drivers build config 00:01:24.946 crypto/openssl: not in enabled drivers build config 00:01:24.946 crypto/scheduler: not in enabled drivers build config 00:01:24.946 crypto/uadk: not in enabled drivers build config 00:01:24.946 crypto/virtio: not in enabled drivers build config 00:01:24.946 compress/isal: not in enabled drivers build config 00:01:24.946 compress/mlx5: not in enabled drivers build config 00:01:24.946 compress/nitrox: not in enabled drivers build config 00:01:24.946 compress/octeontx: not in enabled drivers build config 00:01:24.946 compress/uadk: not in enabled drivers build config 00:01:24.946 compress/zlib: not in enabled drivers build config 00:01:24.946 regex/mlx5: not in enabled drivers build config 00:01:24.946 regex/cn9k: not in enabled drivers build config 00:01:24.946 ml/cnxk: not in enabled drivers build config 00:01:24.946 vdpa/ifc: not in enabled drivers build config 00:01:24.946 vdpa/mlx5: not in enabled drivers build config 00:01:24.946 vdpa/nfp: not in enabled drivers build config 00:01:24.946 vdpa/sfc: not in enabled drivers build config 00:01:24.946 event/cnxk: not in enabled drivers build config 00:01:24.946 event/dlb2: not in enabled drivers build config 00:01:24.946 event/dpaa: not in enabled drivers build config 00:01:24.946 event/dpaa2: not in enabled drivers build config 00:01:24.946 event/dsw: not in enabled drivers build config 00:01:24.946 event/opdl: not in enabled drivers build config 00:01:24.946 event/skeleton: not in enabled drivers build config 00:01:24.946 event/sw: not in enabled drivers build config 00:01:24.946 event/octeontx: not in enabled drivers build config 00:01:24.946 baseband/acc: not in enabled drivers build config 00:01:24.946 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:24.946 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:24.946 baseband/la12xx: not in enabled drivers build config 
00:01:24.946 baseband/null: not in enabled drivers build config 00:01:24.946 baseband/turbo_sw: not in enabled drivers build config 00:01:24.946 gpu/cuda: not in enabled drivers build config 00:01:24.946 00:01:24.946 00:01:24.946 Build targets in project: 224 00:01:24.946 00:01:24.946 DPDK 24.07.0-rc2 00:01:24.946 00:01:24.946 User defined options 00:01:24.946 libdir : lib 00:01:24.946 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.946 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:24.946 c_link_args : 00:01:24.946 enable_docs : false 00:01:24.946 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:24.946 enable_kmods : false 00:01:24.946 machine : native 00:01:24.946 tests : false 00:01:24.946 00:01:24.946 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.946 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:24.946 06:48:54 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:24.946 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:25.212 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:25.212 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:25.212 [3/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:25.212 [4/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:25.212 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:25.212 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:25.212 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:25.212 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:25.212 [9/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:25.212 [10/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:25.212 [11/723] Linking static target lib/librte_kvargs.a 00:01:25.212 [12/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:25.473 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:25.473 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:25.473 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:25.473 [16/723] Linking static target lib/librte_log.a 00:01:25.737 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:25.737 [18/723] Linking static target lib/librte_argparse.a 00:01:25.737 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.000 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.265 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:26.265 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:26.265 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:26.265 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:26.265 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:26.265 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:26.265 
[27/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:26.265 [28/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.265 [29/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:26.265 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:26.265 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:26.265 [32/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:26.265 [33/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:26.265 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:26.265 [35/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:26.265 [36/723] Linking target lib/librte_log.so.24.2 00:01:26.265 [37/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:26.265 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:26.265 [39/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:26.265 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:26.265 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:26.265 [42/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:26.265 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:26.526 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:26.526 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:26.526 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:26.526 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:26.526 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:26.526 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:26.526 [50/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:26.526 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:26.526 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:26.526 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:26.526 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:26.526 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:26.526 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:26.526 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:26.526 [58/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:26.526 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:26.526 [60/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:26.526 [61/723] Linking target lib/librte_kvargs.so.24.2 00:01:26.526 [62/723] Linking target lib/librte_argparse.so.24.2 00:01:26.785 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:26.785 [64/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:26.785 [65/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:26.785 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 
00:01:26.785 [67/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:27.049 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:27.049 [69/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:27.049 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:27.049 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:27.049 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:27.049 [73/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:27.312 [74/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:27.312 [75/723] Linking static target lib/librte_pci.a 00:01:27.312 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:27.312 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:27.312 [78/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:27.312 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:27.312 [80/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:27.312 [81/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:27.572 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:27.572 [83/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:27.572 [84/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:27.572 [85/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:27.572 [86/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:27.572 [87/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:27.572 [88/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:27.572 [89/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:27.572 [90/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.572 [91/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:27.572 [92/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:27.572 [93/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:27.572 [94/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:27.572 [95/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:27.572 [96/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:27.572 [97/723] Linking static target lib/librte_ring.a 00:01:27.572 [98/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:27.572 [99/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:27.572 [100/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:27.572 [101/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:27.833 [102/723] Linking static target lib/librte_meter.a 00:01:27.833 [103/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:27.833 [104/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:27.833 [105/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:27.833 [106/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:27.833 [107/723] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:27.833 [108/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:27.833 [109/723] Linking static target lib/librte_telemetry.a 00:01:27.833 [110/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:27.833 [111/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:27.833 [112/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:27.833 [113/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:28.096 [114/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:28.097 [115/723] Linking static target lib/librte_net.a 00:01:28.097 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:28.097 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:28.097 [118/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.097 [119/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.097 [120/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:28.097 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:28.097 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:28.097 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:28.097 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:28.359 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:28.359 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:28.359 [127/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.622 [128/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:28.622 [129/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:28.622 [130/723] Linking static target lib/librte_mempool.a 00:01:28.622 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:28.622 [132/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.622 [133/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:28.622 [134/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:28.622 [135/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:28.622 [136/723] Linking target lib/librte_telemetry.so.24.2 00:01:28.622 [137/723] Linking static target lib/librte_cmdline.a 00:01:28.622 [138/723] Linking static target lib/librte_eal.a 00:01:28.622 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:28.622 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:28.885 [141/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:28.885 [142/723] Linking static target lib/librte_cfgfile.a 00:01:28.885 [143/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:28.885 [144/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:28.885 [145/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:28.885 [146/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:28.885 [147/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:28.885 [148/723] Compiling C 
object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:28.885 [149/723] Linking static target lib/librte_metrics.a 00:01:28.885 [150/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:28.885 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:28.885 [152/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:29.150 [153/723] Linking static target lib/librte_rcu.a 00:01:29.150 [154/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:29.150 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:29.150 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:29.150 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:29.150 [158/723] Linking static target lib/librte_bitratestats.a 00:01:29.411 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:29.411 [160/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.411 [161/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:29.411 [162/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:29.411 [163/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:29.411 [164/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:29.411 [165/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:29.411 [166/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:29.411 [167/723] Linking static target lib/librte_mbuf.a 00:01:29.411 [168/723] Linking static target lib/librte_timer.a 00:01:29.411 [169/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.411 [170/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.411 [171/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.411 [172/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.672 [173/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:29.673 [174/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:29.673 [175/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:29.673 [176/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:29.673 [177/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:29.673 [178/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:29.935 [179/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:29.935 [180/723] Linking static target lib/librte_bbdev.a 00:01:29.935 [181/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:29.935 [182/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.935 [183/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:29.935 [184/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:29.935 [185/723] Linking static target lib/librte_compressdev.a 00:01:29.935 [186/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:29.935 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:29.935 [188/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:30.198 [189/723] Generating 
lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.198 [190/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:30.198 [191/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:30.198 [192/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:30.461 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.726 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:30.726 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:30.726 [196/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:30.726 [197/723] Linking static target lib/librte_dmadev.a 00:01:30.726 [198/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.726 [199/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:30.726 [200/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:30.726 [201/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.726 [202/723] Linking static target lib/librte_distributor.a 00:01:30.986 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:30.986 [204/723] Linking static target lib/librte_bpf.a 00:01:30.986 [205/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:30.986 [206/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:30.986 [207/723] Linking static target lib/librte_dispatcher.a 00:01:30.986 [208/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:30.986 [209/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:30.986 [210/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:30.986 [211/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:30.986 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:30.986 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:31.251 [214/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:31.251 [215/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:31.251 [216/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:31.251 [217/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.251 [218/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:31.251 [219/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:31.251 [220/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:31.251 [221/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:31.251 [222/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:31.251 [223/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.251 [224/723] Linking static target lib/librte_gpudev.a 00:01:31.251 [225/723] Linking static target lib/librte_gro.a 00:01:31.251 [226/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.251 [227/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:31.251 [228/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:31.251 [229/723] Linking static target lib/librte_jobstats.a 00:01:31.512 
[230/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.512 [231/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:31.512 [232/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:31.512 [233/723] Linking static target lib/librte_gso.a 00:01:31.512 [234/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:31.512 [235/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.776 [236/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.776 [237/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:31.776 [238/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.776 [239/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:31.776 [240/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:31.776 [241/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.776 [242/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:31.776 [243/723] Linking static target lib/librte_ip_frag.a 00:01:31.776 [244/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:31.776 [245/723] Linking static target lib/librte_latencystats.a 00:01:31.776 [246/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.040 [247/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:32.040 [248/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:32.040 [249/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:32.040 [250/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:32.040 [251/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:32.040 [252/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:32.040 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:32.303 [254/723] Linking static target lib/librte_efd.a 00:01:32.303 [255/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:32.303 [256/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:32.303 [257/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.303 [258/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.303 [259/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:32.303 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:32.303 [261/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:32.303 [262/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:32.565 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:32.565 [264/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.565 [265/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.565 [266/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.565 [267/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:32.565 
[268/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.826 [269/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:32.826 [270/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.826 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:32.826 [272/723] Linking static target lib/librte_regexdev.a 00:01:32.826 [273/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.826 [274/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:32.826 [275/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:32.826 [276/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:32.826 [277/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:32.826 [278/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:32.826 [279/723] Linking static target lib/librte_rawdev.a 00:01:32.826 [280/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:32.826 [281/723] Linking static target lib/librte_pcapng.a 00:01:33.089 [282/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:33.089 [283/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:33.089 [284/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:33.089 [285/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:33.089 [286/723] Linking static target lib/librte_power.a 00:01:33.089 [287/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:33.089 [288/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:33.089 [289/723] Linking static target lib/librte_mldev.a 00:01:33.089 [290/723] Linking static target lib/librte_stack.a 00:01:33.089 [291/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:33.089 [292/723] Linking static target lib/librte_lpm.a 00:01:33.354 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:33.354 [294/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.354 [295/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:33.354 [296/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:33.354 [297/723] Linking static target lib/acl/libavx2_tmp.a 00:01:33.354 [298/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:33.354 [299/723] Linking static target lib/librte_reorder.a 00:01:33.354 [300/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:33.354 [301/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:33.354 [302/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:33.354 [303/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.354 [304/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:33.354 [305/723] Linking static target lib/librte_cryptodev.a 00:01:33.354 [306/723] Linking static target lib/librte_security.a 00:01:33.612 [307/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.612 [308/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:33.612 [309/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:33.612 [310/723] Generating lib/lpm.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:33.612 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:33.612 [312/723] Linking static target lib/librte_hash.a 00:01:33.875 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:33.875 [314/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.875 [315/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:33.875 [316/723] Linking static target lib/librte_rib.a 00:01:33.875 [317/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:33.875 [318/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:33.875 [319/723] Linking static target lib/acl/libavx512_tmp.a 00:01:33.875 [320/723] Linking static target lib/librte_acl.a 00:01:33.875 [321/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.875 [322/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:33.875 [323/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.875 [324/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:33.875 [325/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:34.136 [326/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:34.136 [327/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:34.136 [328/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.136 [329/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:34.136 [330/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.136 [331/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:34.136 [332/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:34.136 [333/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:34.136 [334/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:34.136 [335/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:34.398 [336/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:34.398 [337/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:34.398 [338/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:34.398 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.398 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:34.662 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.662 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:34.662 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.923 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:35.186 [345/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:35.186 [346/723] Linking static target lib/librte_eventdev.a 00:01:35.186 [347/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:35.186 [348/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:35.186 [349/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:35.186 [350/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:35.444 [351/723] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:35.444 [352/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:35.444 [353/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:35.444 [354/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:35.444 [355/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:35.444 [356/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.444 [357/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:35.444 [358/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:35.444 [359/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.444 [360/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:35.444 [361/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:35.444 [362/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:35.704 [363/723] Linking static target lib/librte_member.a 00:01:35.704 [364/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:35.704 [365/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:35.704 [366/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:35.704 [367/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:35.704 [368/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:35.704 [369/723] Linking static target lib/librte_fib.a 00:01:35.704 [370/723] Linking static target lib/librte_sched.a 00:01:35.704 [371/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:35.704 [372/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:35.704 [373/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:35.704 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:35.704 [375/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:35.704 [376/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:35.971 [377/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:35.971 [378/723] Linking static target lib/librte_ethdev.a 00:01:35.971 [379/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:35.971 [380/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:35.971 [381/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:35.971 [382/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:35.971 [383/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.235 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:36.236 [385/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:36.236 [386/723] Linking static target lib/librte_ipsec.a 00:01:36.236 [387/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.236 [388/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:36.236 [389/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.236 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:36.497 [391/723] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:36.497 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:36.497 [393/723] Linking static target lib/librte_pdump.a 00:01:36.787 [394/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:36.787 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:36.787 [396/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:36.787 [397/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:36.787 [398/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.787 [399/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:36.787 [400/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.787 [401/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:36.787 [402/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:36.787 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:36.787 [404/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:36.787 [405/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:36.787 [406/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:37.060 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:37.060 [408/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:37.060 [409/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.060 [410/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:37.060 [411/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:37.060 [412/723] Linking static target lib/librte_pdcp.a 00:01:37.060 [413/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:37.060 [414/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:37.060 [415/723] Linking static target lib/librte_table.a 00:01:37.321 [416/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:37.321 [417/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:37.321 [418/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:37.321 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:37.321 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:37.581 [421/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:37.581 [422/723] Linking static target lib/librte_graph.a 00:01:37.581 [423/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:37.581 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.847 [425/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:37.847 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:37.847 [427/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.847 [428/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:37.847 [429/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:37.847 [430/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:37.847 [431/723] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:37.847 [432/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:38.108 [433/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:38.108 [434/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:38.108 [435/723] Linking static target lib/librte_port.a 00:01:38.108 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:38.108 [437/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:38.108 [438/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:38.108 [439/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:38.370 [440/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.370 [441/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:38.370 [442/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.370 [443/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.370 [444/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.370 [445/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.631 [446/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.631 [447/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:38.631 [448/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.631 [449/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.631 [450/723] Linking static target drivers/librte_bus_vdev.a 00:01:38.631 [451/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:38.631 [452/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:38.631 [453/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:38.631 [454/723] Linking static target lib/librte_node.a 00:01:38.893 [455/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:38.893 [456/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:38.893 [457/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.893 [458/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:38.893 [459/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.893 [460/723] Linking static target drivers/librte_bus_pci.a 00:01:38.893 [461/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.893 [462/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.893 [463/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:38.893 [464/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:38.894 [465/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.161 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:39.161 [467/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:39.161 [468/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:39.161 [469/723] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:39.161 [470/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:39.161 [471/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:39.161 [472/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:39.161 [473/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:39.161 [474/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.420 [475/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.420 [476/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.420 [477/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:39.421 [478/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:39.690 [479/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:39.690 [480/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:39.690 [481/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:39.690 [482/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.690 [483/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.690 [484/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.690 [485/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:39.690 [486/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.690 [487/723] Linking static target drivers/librte_mempool_ring.a 00:01:39.690 [488/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:39.690 [489/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.690 [490/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:39.690 [491/723] Linking target lib/librte_eal.so.24.2 00:01:39.690 [492/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:39.690 [493/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:39.951 [494/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:39.951 [495/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:39.951 [496/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:39.951 [497/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:39.951 [498/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:40.213 [499/723] Linking target lib/librte_ring.so.24.2 00:01:40.213 [500/723] Linking target lib/librte_meter.so.24.2 00:01:40.213 [501/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:40.213 [502/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:40.213 [503/723] Linking target lib/librte_pci.so.24.2 00:01:40.213 [504/723] Linking target lib/librte_timer.so.24.2 00:01:40.213 [505/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:40.479 [506/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:40.479 [507/723] Linking target lib/librte_acl.so.24.2 00:01:40.479 [508/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:40.479 [509/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:40.479 [510/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:40.479 
[511/723] Linking target lib/librte_rcu.so.24.2 00:01:40.479 [512/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:40.479 [513/723] Linking target lib/librte_mempool.so.24.2 00:01:40.479 [514/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:40.479 [515/723] Linking target lib/librte_cfgfile.so.24.2 00:01:40.479 [516/723] Linking target lib/librte_dmadev.so.24.2 00:01:40.479 [517/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:40.479 [518/723] Linking target lib/librte_jobstats.so.24.2 00:01:40.479 [519/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:40.479 [520/723] Linking target lib/librte_rawdev.so.24.2 00:01:40.479 [521/723] Linking target lib/librte_stack.so.24.2 00:01:40.479 [522/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:40.479 [523/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:40.479 [524/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:40.479 [525/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:40.479 [526/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:40.739 [527/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:40.739 [528/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:40.739 [529/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:40.739 [530/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:40.739 [531/723] Linking target lib/librte_mbuf.so.24.2 00:01:40.739 [532/723] Linking target lib/librte_rib.so.24.2 00:01:40.739 [533/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:40.739 [534/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:40.739 [535/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:40.739 [536/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:40.739 [537/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:40.739 [538/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:41.004 [539/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:41.004 [540/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:41.004 [541/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:41.004 [542/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:41.004 [543/723] Linking target lib/librte_net.so.24.2 00:01:41.004 [544/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:41.265 [545/723] Linking target lib/librte_bbdev.so.24.2 00:01:41.265 [546/723] Linking target lib/librte_compressdev.so.24.2 00:01:41.265 [547/723] Linking target lib/librte_distributor.so.24.2 00:01:41.265 [548/723] Linking target lib/librte_cryptodev.so.24.2 00:01:41.265 [549/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:41.265 [550/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:41.265 [551/723] Linking target lib/librte_gpudev.so.24.2 
00:01:41.265 [552/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:41.265 [553/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:41.265 [554/723] Linking target lib/librte_mldev.so.24.2 00:01:41.265 [555/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:41.265 [556/723] Linking target lib/librte_regexdev.so.24.2 00:01:41.265 [557/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:41.265 [558/723] Linking target lib/librte_reorder.so.24.2 00:01:41.266 [559/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:41.266 [560/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:41.266 [561/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:41.266 [562/723] Linking target lib/librte_sched.so.24.2 00:01:41.266 [563/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:41.266 [564/723] Linking target lib/librte_fib.so.24.2 00:01:41.266 [565/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:41.266 [566/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:41.266 [567/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:41.266 [568/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:41.527 [569/723] Linking target lib/librte_hash.so.24.2 00:01:41.527 [570/723] Linking target lib/librte_cmdline.so.24.2 00:01:41.527 [571/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:41.527 [572/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:41.527 [573/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:41.527 [574/723] Linking target lib/librte_security.so.24.2 00:01:41.527 [575/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:41.527 [576/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:41.527 [577/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:41.527 [578/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:41.527 [579/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:41.527 [580/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:41.527 [581/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:41.527 [582/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:41.789 [583/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:41.789 [584/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:41.789 [585/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:41.789 [586/723] Linking target lib/librte_efd.so.24.2 00:01:41.789 [587/723] Linking target lib/librte_lpm.so.24.2 00:01:41.789 [588/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:41.789 [589/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:41.789 [590/723] Linking target lib/librte_member.so.24.2 
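The alternating "Linking static target lib/librte_*.a" and "Linking target lib/librte_*.so.24.2" lines reflect that each DPDK library is produced both as a static archive and as a versioned shared object; the install step later in this log normally also ships a pkg-config file (libdpdk.pc) describing them. A hedged sketch of how an application would link against this particular install — my_app.c is a hypothetical consumer, and the pkg-config location is an assumption based on the prefix/libdir from the configuration summary:

    # Assumption: with prefix=$DPDK/build and libdir=lib, the .pc files
    # land under $DPDK/build/lib/pkgconfig after `ninja install`.
    export PKG_CONFIG_PATH="$DPDK/build/lib/pkgconfig"
    cc -O2 my_app.c -o my_app $(pkg-config --cflags --libs libdpdk)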
00:01:41.789 [591/723] Linking target lib/librte_ipsec.so.24.2 00:01:41.789 [592/723] Linking target lib/librte_pdcp.so.24.2 00:01:42.047 [593/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:42.047 [594/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:42.047 [595/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:42.047 [596/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:42.047 [597/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:42.307 [598/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:42.307 [599/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:42.307 [600/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:42.307 [601/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:42.307 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:42.568 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:42.568 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:42.568 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:42.568 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:42.568 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:42.568 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:42.829 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:42.829 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:42.829 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:42.829 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:42.829 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:42.829 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:42.829 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:43.088 [616/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:43.088 [617/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:43.088 [618/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:43.088 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:43.088 [620/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:43.088 [621/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:43.088 [622/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:43.347 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:43.347 [624/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:43.347 [625/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:43.604 [626/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:43.604 [627/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:43.604 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:43.604 [629/723] Compiling C 
object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:43.604 [630/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:43.860 [631/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:43.860 [632/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:43.860 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:43.860 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:43.860 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:43.860 [636/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:43.860 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:43.860 [638/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.860 [639/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:43.860 [640/723] Linking target lib/librte_ethdev.so.24.2 00:01:44.116 [641/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:44.116 [642/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:44.116 [643/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:44.116 [644/723] Linking target lib/librte_pcapng.so.24.2 00:01:44.116 [645/723] Linking target lib/librte_bpf.so.24.2 00:01:44.116 [646/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:44.116 [647/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:44.116 [648/723] Linking target lib/librte_metrics.so.24.2 00:01:44.116 [649/723] Linking target lib/librte_gso.so.24.2 00:01:44.116 [650/723] Linking target lib/librte_ip_frag.so.24.2 00:01:44.117 [651/723] Linking target lib/librte_eventdev.so.24.2 00:01:44.117 [652/723] Linking target lib/librte_gro.so.24.2 00:01:44.117 [653/723] Linking target lib/librte_power.so.24.2 00:01:44.373 [654/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:44.373 [655/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:44.373 [656/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:44.373 [657/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:44.373 [658/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:44.373 [659/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:44.373 [660/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:44.373 [661/723] Linking target lib/librte_graph.so.24.2 00:01:44.373 [662/723] Linking target lib/librte_pdump.so.24.2 00:01:44.373 [663/723] Linking target lib/librte_bitratestats.so.24.2 00:01:44.373 [664/723] Linking target lib/librte_latencystats.so.24.2 00:01:44.373 [665/723] Linking target lib/librte_port.so.24.2 00:01:44.373 [666/723] Linking target lib/librte_dispatcher.so.24.2 00:01:44.373 [667/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:44.373 [668/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:44.373 [669/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:44.631 [670/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:44.631 [671/723] Linking target lib/librte_node.so.24.2 00:01:44.631 [672/723] Linking 
target lib/librte_table.so.24.2 00:01:44.631 [673/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:44.631 [674/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:44.631 [675/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:44.889 [676/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:44.889 [677/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:44.889 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:45.454 [679/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:45.454 [680/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:45.454 [681/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:45.454 [682/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.712 [683/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:45.712 [684/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:45.712 [685/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:45.712 [686/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:45.712 [687/723] Linking static target drivers/librte_net_i40e.a 00:01:45.712 [688/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:45.970 [689/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:46.227 [690/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.485 [691/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:46.742 [692/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:47.000 [693/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:47.258 [694/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:48.632 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:56.736 [696/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.736 [697/723] Linking static target lib/librte_vhost.a 00:01:56.994 [698/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.253 [699/723] Linking target lib/librte_vhost.so.24.2 00:01:57.253 [700/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:57.253 [701/723] Linking static target lib/librte_pipeline.a 00:01:57.820 [702/723] Linking target app/dpdk-test-acl 00:01:57.820 [703/723] Linking target app/dpdk-test-dma-perf 00:01:57.820 [704/723] Linking target app/dpdk-test-regex 00:01:57.820 [705/723] Linking target app/dpdk-test-cmdline 00:01:57.820 [706/723] Linking target app/dpdk-test-bbdev 00:01:57.820 [707/723] Linking target app/dpdk-test-fib 00:01:57.820 [708/723] Linking target app/dpdk-proc-info 00:01:57.820 [709/723] Linking target app/dpdk-graph 00:01:57.820 [710/723] Linking target app/dpdk-dumpcap 00:01:57.820 [711/723] Linking target app/dpdk-test-mldev 00:01:57.820 [712/723] Linking target app/dpdk-pdump 00:01:57.820 [713/723] Linking target app/dpdk-test-gpudev 00:01:57.820 [714/723] Linking target app/dpdk-test-sad 00:01:57.820 [715/723] Linking target app/dpdk-test-pipeline 00:01:57.820 [716/723] Linking target app/dpdk-test-security-perf 00:01:57.820 [717/723] Linking target 
app/dpdk-test-flow-perf
00:01:57.820 [718/723] Linking target app/dpdk-test-compress-perf
00:01:57.820 [719/723] Linking target app/dpdk-test-crypto-perf
00:01:58.077 [720/723] Linking target app/dpdk-test-eventdev
00:01:58.077 [721/723] Linking target app/dpdk-testpmd
00:01:59.977 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.977 [723/723] Linking target lib/librte_pipeline.so.24.2
00:01:59.977 06:49:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s
00:01:59.977 06:49:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:01:59.977 06:49:29 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
00:02:00.234 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:00.234 [0/1] Installing files.
00:02:00.495 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints
00:02:00.495 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
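The "$"-traced commands a few lines up are the hinge of this stage: autobuild_common.sh runs "uname -s" and tests the result against FreeBSD, so FreeBSD-only handling is skipped on this Linux runner, and the meson-configured tree is then handed to ninja to install. A minimal sketch of that gate, assuming only what the trace shows (the build directory and -j48 are taken from this log; everything else in autobuild_common.sh is omitted):

    #!/usr/bin/env bash
    # Sketch of the OS gate and install hand-off traced above.
    # BUILD_TMP and the -j level come from this log; the rest of
    # autobuild_common.sh is not reproduced here.
    BUILD_TMP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp

    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        # FreeBSD-specific handling (not exercised in this run).
        echo "FreeBSD runner: a different install path would apply"
    else
        # meson wraps the whole copy phase in a single ninja edge,
        # which is why the log shows one "[0/1] Installing files."
        # step before the per-file "Installing ... to ..." lines.
        ninja -C "$BUILD_TMP" -j48 install
    fi

Everything below is that single install step fanning out: ninja first copies the usertools telemetry endpoint scripts (memory.py, cpu.py, counters.py) into build/share/dpdk/telemetry-endpoints, then mirrors each bundled example source tree into build/share/dpdk/examples. The long run of "Installing ... to ..." lines that follows is therefore expected output, not an error; once installed, any of these examples can be rebuilt out of tree with its shipped Makefile, provided libdpdk.pc is discoverable through pkg-config.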
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:00.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:00.498 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:00.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/{parser.c, tmgr.h, cli.c, mempool.c, common.h, Makefile, pipeline.h, conn.h, pipeline.c, tap.c} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/{route_ecmp.cli, firewall.cli, tap.cli, flow_crypto.cli, route.cli, flow.cli, rss.cli, l2fwd.cli} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/{blk_spec.h, blk.c, vhost_blk.h, vhost_blk_compat.c, vhost_blk.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/{altivec,sse,neon}/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/{altivec,sse,neon}
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/{basicfwd.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/{main.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/{main.c, rte_policer.c, Makefile, main.h, rte_policer.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/{main.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:00.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/{main.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/{dmafwd.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/{main.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/{main.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/{main.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/{pipeline_worker_generic.c, pipeline_common.h, main.c, Makefile, pipeline_worker_tx.c} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/{main.c, vdpa_blk_compact.h, commands.list, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/{rte_ethtool.h, rte_ethtool.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/{main.c, Makefile, ethapp.h, ethapp.c} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/{ptpclient.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:00.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/{sa.c, esp.c, ep0.cfg, ipip.h, sad.h, ipsec.h, event_helper.c, ipsec-secgw.c, ep1.cfg, ipsec_worker.h, sp6.c, event_helper.h, parser.h, sp4.c, ipsec-secgw.h, ipsec_neon.h, ipsec_process.c, flow.c, ipsec_worker.c, esp.h, ipsec.c, parser.c, sad.c, Makefile, ipsec_lpm_neon.h, flow.h, rt.c} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:00.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/{common_defs.sh, tun_aesctr_sha1_defs.sh, trs_aesgcm_common_defs.sh, load_env.sh, pkttest.py, trs_3descbc_sha1_defs.sh, trs_aesctr_sha1_defs.sh, tun_3descbc_sha1_defs.sh, pkttest.sh, trs_ipv6opts.py, tun_3descbc_sha1_common_defs.sh, tun_null_header_reconstruct.py, common_defs_secgw.sh, data_rxtx.sh, trs_aesgcm_defs.sh, trs_aescbc_sha1_common_defs.sh, bypass_defs.sh, tun_aesgcm_defs.sh, linux_test.sh, tun_aescbc_sha1_defs.sh, trs_aesctr_sha1_common_defs.sh, tun_aesctr_sha1_common_defs.sh, trs_3descbc_sha1_common_defs.sh, tun_aescbc_sha1_common_defs.sh, tun_aesgcm_common_defs.sh, trs_aescbc_sha1_defs.sh, run_test.sh} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:00.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/{main.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:00.501 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/{l2fwd_poll.h, l2fwd_event.c, l2fwd_event_internal_port.c, l2fwd_event_generic.c, l2fwd_common.h, main.c, l2fwd_common.c, l2fwd_event.h, l2fwd_poll.c, Makefile} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:00.501 Installing lib/librte_{log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, mldev, rib, reorder, sched, security, stack, vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, graph, node}.{a, so.24.2} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:01.068 Installing drivers/librte_{bus_pci, bus_vdev, mempool_ring, net_i40e}.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:01.068 Installing drivers/librte_{bus_pci, bus_vdev, mempool_ring, net_i40e}.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:01.068 Installing app/{dpdk-dumpcap, dpdk-graph, dpdk-pdump, dpdk-proc-info, dpdk-test-acl, dpdk-test-bbdev, dpdk-test-cmdline, dpdk-test-compress-perf, dpdk-test-crypto-perf, dpdk-test-dma-perf, dpdk-test-eventdev, dpdk-test-fib, dpdk-test-flow-perf, dpdk-test-gpudev, dpdk-test-mldev, dpdk-test-pipeline, dpdk-testpmd, dpdk-test-regex, dpdk-test-sad, dpdk-test-security-perf} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:01.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/{log/rte_log.h, kvargs/rte_kvargs.h, argparse/rte_argparse.h, telemetry/rte_telemetry.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/{rte_atomic.h, rte_byteorder.h, rte_cpuflags.h, rte_cycles.h, rte_io.h, rte_memcpy.h, rte_pause.h, rte_power_intrinsics.h, rte_prefetch.h, rte_rwlock.h, rte_spinlock.h, rte_vect.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:01.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/{rte_atomic.h, rte_byteorder.h, rte_cpuflags.h, rte_cycles.h, rte_io.h, rte_memcpy.h, rte_pause.h, rte_power_intrinsics.h, rte_prefetch.h, rte_rtm.h, rte_rwlock.h, rte_spinlock.h, rte_vect.h, rte_atomic_32.h, rte_atomic_64.h, rte_byteorder_32.h, rte_byteorder_64.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/{rte_alarm.h, rte_bitmap.h, rte_bitops.h, rte_branch_prediction.h, rte_bus.h, rte_class.h, rte_common.h, rte_compat.h, rte_debug.h, rte_dev.h, rte_devargs.h, rte_eal.h, rte_eal_memconfig.h, rte_eal_trace.h, rte_errno.h, rte_epoll.h, rte_fbarray.h, rte_hexdump.h, rte_hypervisor.h, rte_interrupts.h, rte_keepalive.h, rte_launch.h, rte_lcore.h, rte_lock_annotations.h, rte_malloc.h, rte_mcslock.h, rte_memory.h, rte_memzone.h, rte_pci_dev_feature_defs.h, rte_pci_dev_features.h, rte_per_lcore.h, rte_pflock.h, rte_random.h, rte_reciprocal.h, rte_seqcount.h, rte_seqlock.h, rte_service.h, rte_service_component.h, rte_stdatomic.h, rte_string_fns.h, rte_tailq.h, rte_thread.h, rte_ticketlock.h, rte_time.h, rte_trace.h, rte_trace_point.h, rte_trace_point_register.h, rte_uuid.h, rte_version.h, rte_vfio.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/{rte_ring.h, rte_ring_core.h, rte_ring_elem.h, rte_ring_elem_pvt.h, rte_ring_c11_pvt.h, rte_ring_generic_pvt.h, rte_ring_hts.h, rte_ring_hts_elem_pvt.h, rte_ring_peek.h, rte_ring_peek_elem_pvt.h, rte_ring_peek_zc.h, rte_ring_rts.h, rte_ring_rts_elem_pvt.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/{rte_mempool.h, rte_mempool_trace_fp.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/{rte_mbuf.h, rte_mbuf_core.h, rte_mbuf_ptype.h, rte_mbuf_pool_ops.h, rte_mbuf_dyn.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/{rte_ip.h, rte_tcp.h, rte_udp.h, rte_tls.h, rte_dtls.h, rte_esp.h, rte_sctp.h, rte_icmp.h, rte_arp.h, rte_ether.h, rte_macsec.h, rte_vxlan.h, rte_gre.h, rte_gtp.h, rte_net.h, rte_net_crc.h, rte_mpls.h, rte_higig.h, rte_ecpri.h, rte_pdcp_hdr.h, rte_geneve.h, rte_l2tpv2.h, rte_ppp.h, rte_ib.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/{rte_cman.h, rte_ethdev.h, rte_ethdev_trace_fp.h, rte_dev_info.h, rte_flow.h, rte_flow_driver.h, rte_mtr.h, rte_mtr_driver.h, rte_tm.h, rte_tm_driver.h, rte_ethdev_core.h, rte_eth_ctrl.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/{cmdline.h, cmdline_parse.h, cmdline_parse_num.h, cmdline_parse_ipaddr.h, cmdline_parse_etheraddr.h, cmdline_parse_string.h, cmdline_rdline.h, cmdline_vt100.h, cmdline_socket.h, cmdline_cirbuf.h, cmdline_parse_portlist.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/{rte_metrics.h, rte_metrics_telemetry.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/{rte_fbk_hash.h, rte_hash_crc.h, rte_hash.h, rte_jhash.h, rte_thash.h, rte_thash_gfni.h, rte_crc_arm64.h, rte_crc_generic.h, rte_crc_sw.h, rte_crc_x86.h, rte_thash_x86_gfni.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/{rte_acl.h, rte_acl_osdep.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/{rte_bbdev.h, rte_bbdev_pmd.h, rte_bbdev_op.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/{bpf_def.h, rte_bpf.h, rte_bpf_ethdev.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/{rte_compressdev.h, rte_comp.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/{rte_cryptodev.h, rte_cryptodev_trace_fp.h} to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.070 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.071 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:01.072 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:01.072 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:01.072 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:01.072 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:01.072 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:01.072 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:01.072 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:01.072 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:01.072 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:01.072 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:01.072 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:01.072 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 
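The long run of "Installing symlink" entries here is the standard ELF shared-library versioning scheme: the real object is the fully versioned librte_*.so.24.2, the librte_*.so.24 link carries the soname that the dynamic loader resolves at run time, and the unversioned librte_*.so link is what the link editor resolves when building against the library. A minimal sketch of the pattern for a single library, assuming the same build/lib directory as this job; the log records the installed links, not these exact commands:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
  # Soname link, used by the dynamic loader at run time:
  ln -sf librte_log.so.24.2 librte_log.so.24
  # Linker name, used by 'cc ... -lrte_log' at build time:
  ln -sf librte_log.so.24 librte_log.so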
00:02:01.072 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:01.072 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:01.072 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:01.072 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:01.072 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:01.072 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:01.072 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:01.072 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:01.072 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:01.072 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:01.072 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:01.072 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:01.072 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:01.072 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:01.072 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:01.072 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:01.072 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:01.072 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:01.072 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:01.072 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:01.072 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:01.072 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:01.072 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:01.072 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:01.072 Installing symlink pointing to librte_acl.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:01.072 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:01.072 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:01.072 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:01.072 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:01.072 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:01.072 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:01.072 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:01.072 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:01.072 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:01.072 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:01.072 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:01.073 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:01.073 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:01.073 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:01.073 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:01.073 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:01.073 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:01.073 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:01.073 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:01.073 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:01.073 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:01.073 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:01.073 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:01.073 Installing symlink 
pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:01.073 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:01.073 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:01.073 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:01.073 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:01.073 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:01.073 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:01.073 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:01.073 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:01.073 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:01.073 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:01.073 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:01.073 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:01.073 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:01.073 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:01.073 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:01.073 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:01.073 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:01.073 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:01.073 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:01.073 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:01.073 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:01.073 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:01.073 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:01.073 Installing symlink pointing to librte_mldev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:01.073 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:01.073 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:01.073 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:01.073 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:01.073 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:01.073 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:01.073 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:01.073 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:01.073 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:01.073 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:01.073 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:01.073 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:01.073 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:01.073 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:01.073 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:01.073 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:01.073 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:01.073 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:01.073 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:01.073 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:01.073 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:01.073 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:01.073 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:01.073 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:01.073 
Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:01.073 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:01.073 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:01.073 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:01.073 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:01.073 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:01.073 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:01.073 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:01.073 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:01.073 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:01.073 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:01.073 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:01.073 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:01.073 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:01.073 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:01.073 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:01.073 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:01.073 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:01.073 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:01.073 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:01.073 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:01.073 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:01.073 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:01.073 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:01.073 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:01.073 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:01.073 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:01.073 06:49:30 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:01.073 06:49:30 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:01.073 00:02:01.073 real 0m41.357s 00:02:01.073 user 13m56.191s 00:02:01.073 sys 2m0.578s 00:02:01.073 06:49:30 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:01.073 06:49:30 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:01.073 ************************************ 00:02:01.073 END TEST build_native_dpdk 00:02:01.073 ************************************ 00:02:01.073 06:49:30 -- common/autotest_common.sh@1142 -- $ return 0 00:02:01.073 06:49:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:01.073 06:49:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:01.073 06:49:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:01.073 06:49:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:01.073 06:49:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:01.073 06:49:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:01.073 06:49:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:01.073 06:49:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:01.073 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:01.331 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:01.331 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:01.331 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:01.588 Using 'verbs' RDMA provider 00:02:12.125 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:20.249 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:20.506 Creating mk/config.mk...done. 00:02:20.506 Creating mk/cc.flags.mk...done. 00:02:20.507 Type 'make' to build. 00:02:20.507 06:49:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:20.507 06:49:49 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:20.507 06:49:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:20.507 06:49:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.507 ************************************ 00:02:20.507 START TEST make 00:02:20.507 ************************************ 00:02:20.507 06:49:49 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:20.764 make[1]: Nothing to be done for 'all'. 
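The "Using .../pkgconfig for additional libs" line above is the payoff of the libdpdk.pc files installed earlier: configure appears to resolve the DPDK library and include paths through pkg-config rather than hard-coding them. A minimal sketch of the same lookup done by hand, assuming the workspace layout of this job; the flags in the comments are illustrative, not copied from the log:

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --cflags libdpdk   # e.g. -I.../dpdk/build/include
  pkg-config --libs libdpdk     # e.g. -L.../dpdk/build/lib -lrte_ethdev ...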
00:02:22.676 The Meson build system 00:02:22.676 Version: 1.3.1 00:02:22.676 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:22.676 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:22.676 Build type: native build 00:02:22.676 Project name: libvfio-user 00:02:22.676 Project version: 0.0.1 00:02:22.676 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:22.676 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:22.676 Host machine cpu family: x86_64 00:02:22.676 Host machine cpu: x86_64 00:02:22.676 Run-time dependency threads found: YES 00:02:22.676 Library dl found: YES 00:02:22.676 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:22.676 Run-time dependency json-c found: YES 0.17 00:02:22.676 Run-time dependency cmocka found: YES 1.1.7 00:02:22.676 Program pytest-3 found: NO 00:02:22.676 Program flake8 found: NO 00:02:22.676 Program misspell-fixer found: NO 00:02:22.676 Program restructuredtext-lint found: NO 00:02:22.676 Program valgrind found: YES (/usr/bin/valgrind) 00:02:22.676 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.676 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.676 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.676 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:22.676 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:22.676 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:22.676 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
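The compiler and dependency probes above are Meson's configure step for the bundled libvfio-user. A minimal sketch of the equivalent manual invocation, assuming the source and build directories named above; the option values match the "User defined options" echoed just below:

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BLD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib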
00:02:22.676 Build targets in project: 8 00:02:22.676 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:22.676 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:22.676 00:02:22.676 libvfio-user 0.0.1 00:02:22.676 00:02:22.676 User defined options 00:02:22.676 buildtype : debug 00:02:22.676 default_library: shared 00:02:22.676 libdir : /usr/local/lib 00:02:22.676 00:02:22.677 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.256 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:23.256 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:23.522 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:23.522 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:23.522 [4/37] Compiling C object samples/null.p/null.c.o 00:02:23.522 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:23.522 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:23.522 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:23.522 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:23.522 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:23.522 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:23.522 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:23.522 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:23.522 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:23.522 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:23.522 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:23.522 [16/37] Compiling C object samples/client.p/client.c.o 00:02:23.522 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:23.522 [18/37] Compiling C object samples/server.p/server.c.o 00:02:23.522 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:23.522 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:23.522 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:23.522 [22/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:23.522 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:23.522 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:23.522 [25/37] Linking target samples/client 00:02:23.522 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:23.522 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:23.522 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:23.780 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:23.781 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:23.781 [31/37] Linking target test/unit_tests 00:02:23.781 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:24.042 [33/37] Linking target samples/null 00:02:24.042 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:24.042 [35/37] Linking target samples/lspci 00:02:24.042 [36/37] Linking target samples/server 00:02:24.042 [37/37] Linking target samples/gpio-pci-idio-16 00:02:24.042 INFO: autodetecting backend as ninja 00:02:24.042 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
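Once configured, Meson hands the compile off to Ninja and stages the install under DESTDIR so nothing lands in the real /usr/local prefix; the next lines record exactly that. A minimal sketch of the two steps, assuming the build directory configured above:

  BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  # Build the 37 targets listed above:
  ninja -C "$BLD"
  # Staged install; files land under the DESTDIR tree instead of /usr/local:
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BLD"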
00:02:24.042 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:24.623 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:24.623 ninja: no work to do. 00:02:36.816 CC lib/ut/ut.o 00:02:36.816 CC lib/log/log.o 00:02:36.816 CC lib/log/log_flags.o 00:02:36.816 CC lib/log/log_deprecated.o 00:02:36.816 CC lib/ut_mock/mock.o 00:02:36.816 LIB libspdk_log.a 00:02:36.816 LIB libspdk_ut.a 00:02:36.816 LIB libspdk_ut_mock.a 00:02:36.816 SO libspdk_ut.so.2.0 00:02:36.816 SO libspdk_ut_mock.so.6.0 00:02:36.816 SO libspdk_log.so.7.0 00:02:36.816 SYMLINK libspdk_ut.so 00:02:36.816 SYMLINK libspdk_ut_mock.so 00:02:37.074 SYMLINK libspdk_log.so 00:02:37.074 CC lib/dma/dma.o 00:02:37.074 CC lib/ioat/ioat.o 00:02:37.074 CXX lib/trace_parser/trace.o 00:02:37.074 CC lib/util/base64.o 00:02:37.074 CC lib/util/bit_array.o 00:02:37.074 CC lib/util/cpuset.o 00:02:37.074 CC lib/util/crc16.o 00:02:37.074 CC lib/util/crc32.o 00:02:37.074 CC lib/util/crc32c.o 00:02:37.074 CC lib/util/crc32_ieee.o 00:02:37.074 CC lib/util/crc64.o 00:02:37.074 CC lib/util/dif.o 00:02:37.074 CC lib/util/fd.o 00:02:37.074 CC lib/util/file.o 00:02:37.074 CC lib/util/hexlify.o 00:02:37.074 CC lib/util/iov.o 00:02:37.074 CC lib/util/math.o 00:02:37.074 CC lib/util/pipe.o 00:02:37.074 CC lib/util/strerror_tls.o 00:02:37.074 CC lib/util/string.o 00:02:37.074 CC lib/util/uuid.o 00:02:37.074 CC lib/util/fd_group.o 00:02:37.074 CC lib/util/xor.o 00:02:37.074 CC lib/util/zipf.o 00:02:37.332 CC lib/vfio_user/host/vfio_user_pci.o 00:02:37.332 CC lib/vfio_user/host/vfio_user.o 00:02:37.332 LIB libspdk_dma.a 00:02:37.332 SO libspdk_dma.so.4.0 00:02:37.332 SYMLINK libspdk_dma.so 00:02:37.590 LIB libspdk_vfio_user.a 00:02:37.590 LIB libspdk_ioat.a 00:02:37.590 SO libspdk_vfio_user.so.5.0 00:02:37.590 SO libspdk_ioat.so.7.0 00:02:37.590 SYMLINK libspdk_ioat.so 00:02:37.590 SYMLINK libspdk_vfio_user.so 00:02:37.590 LIB libspdk_util.a 00:02:37.848 SO libspdk_util.so.9.1 00:02:37.848 SYMLINK libspdk_util.so 00:02:38.107 CC lib/vmd/vmd.o 00:02:38.107 CC lib/conf/conf.o 00:02:38.107 CC lib/vmd/led.o 00:02:38.107 CC lib/rdma_provider/common.o 00:02:38.107 CC lib/rdma_utils/rdma_utils.o 00:02:38.107 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:38.107 CC lib/json/json_parse.o 00:02:38.107 CC lib/env_dpdk/env.o 00:02:38.107 CC lib/json/json_util.o 00:02:38.107 CC lib/env_dpdk/memory.o 00:02:38.107 CC lib/json/json_write.o 00:02:38.107 CC lib/env_dpdk/pci.o 00:02:38.107 CC lib/idxd/idxd.o 00:02:38.107 CC lib/env_dpdk/init.o 00:02:38.107 CC lib/idxd/idxd_user.o 00:02:38.107 CC lib/env_dpdk/threads.o 00:02:38.107 CC lib/idxd/idxd_kernel.o 00:02:38.107 CC lib/env_dpdk/pci_ioat.o 00:02:38.107 CC lib/env_dpdk/pci_virtio.o 00:02:38.107 CC lib/env_dpdk/pci_vmd.o 00:02:38.107 CC lib/env_dpdk/pci_idxd.o 00:02:38.107 CC lib/env_dpdk/pci_event.o 00:02:38.107 CC lib/env_dpdk/sigbus_handler.o 00:02:38.107 CC lib/env_dpdk/pci_dpdk.o 00:02:38.107 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:38.107 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:38.107 LIB libspdk_trace_parser.a 00:02:38.107 SO libspdk_trace_parser.so.5.0 00:02:38.365 LIB libspdk_rdma_provider.a 00:02:38.365 SYMLINK libspdk_trace_parser.so 00:02:38.365 SO libspdk_rdma_provider.so.6.0 00:02:38.365 LIB libspdk_conf.a 00:02:38.365 SO libspdk_conf.so.6.0 00:02:38.365 LIB libspdk_rdma_utils.a 00:02:38.365 SYMLINK 
libspdk_rdma_provider.so 00:02:38.365 SO libspdk_rdma_utils.so.1.0 00:02:38.365 SYMLINK libspdk_conf.so 00:02:38.365 SYMLINK libspdk_rdma_utils.so 00:02:38.365 LIB libspdk_json.a 00:02:38.622 SO libspdk_json.so.6.0 00:02:38.622 SYMLINK libspdk_json.so 00:02:38.622 LIB libspdk_idxd.a 00:02:38.622 SO libspdk_idxd.so.12.0 00:02:38.622 CC lib/jsonrpc/jsonrpc_server.o 00:02:38.622 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:38.622 CC lib/jsonrpc/jsonrpc_client.o 00:02:38.622 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:38.622 SYMLINK libspdk_idxd.so 00:02:38.622 LIB libspdk_vmd.a 00:02:38.891 SO libspdk_vmd.so.6.0 00:02:38.891 SYMLINK libspdk_vmd.so 00:02:38.891 LIB libspdk_jsonrpc.a 00:02:38.891 SO libspdk_jsonrpc.so.6.0 00:02:39.197 SYMLINK libspdk_jsonrpc.so 00:02:39.197 CC lib/rpc/rpc.o 00:02:39.454 LIB libspdk_rpc.a 00:02:39.454 SO libspdk_rpc.so.6.0 00:02:39.454 LIB libspdk_env_dpdk.a 00:02:39.454 SYMLINK libspdk_rpc.so 00:02:39.454 SO libspdk_env_dpdk.so.14.1 00:02:39.710 CC lib/trace/trace.o 00:02:39.710 CC lib/trace/trace_flags.o 00:02:39.710 CC lib/notify/notify.o 00:02:39.710 CC lib/keyring/keyring.o 00:02:39.710 CC lib/trace/trace_rpc.o 00:02:39.710 CC lib/notify/notify_rpc.o 00:02:39.710 CC lib/keyring/keyring_rpc.o 00:02:39.710 SYMLINK libspdk_env_dpdk.so 00:02:39.968 LIB libspdk_notify.a 00:02:39.968 SO libspdk_notify.so.6.0 00:02:39.968 LIB libspdk_keyring.a 00:02:39.968 SYMLINK libspdk_notify.so 00:02:39.968 LIB libspdk_trace.a 00:02:39.968 SO libspdk_keyring.so.1.0 00:02:39.968 SO libspdk_trace.so.10.0 00:02:39.968 SYMLINK libspdk_keyring.so 00:02:39.968 SYMLINK libspdk_trace.so 00:02:40.225 CC lib/thread/thread.o 00:02:40.225 CC lib/thread/iobuf.o 00:02:40.225 CC lib/sock/sock.o 00:02:40.225 CC lib/sock/sock_rpc.o 00:02:40.483 LIB libspdk_sock.a 00:02:40.740 SO libspdk_sock.so.10.0 00:02:40.740 SYMLINK libspdk_sock.so 00:02:40.740 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:40.740 CC lib/nvme/nvme_ctrlr.o 00:02:40.740 CC lib/nvme/nvme_fabric.o 00:02:40.740 CC lib/nvme/nvme_ns_cmd.o 00:02:40.740 CC lib/nvme/nvme_ns.o 00:02:40.740 CC lib/nvme/nvme_pcie_common.o 00:02:40.740 CC lib/nvme/nvme_pcie.o 00:02:40.740 CC lib/nvme/nvme_qpair.o 00:02:40.741 CC lib/nvme/nvme.o 00:02:40.741 CC lib/nvme/nvme_quirks.o 00:02:40.741 CC lib/nvme/nvme_transport.o 00:02:40.741 CC lib/nvme/nvme_discovery.o 00:02:40.741 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.741 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.741 CC lib/nvme/nvme_tcp.o 00:02:40.741 CC lib/nvme/nvme_opal.o 00:02:40.741 CC lib/nvme/nvme_io_msg.o 00:02:40.741 CC lib/nvme/nvme_poll_group.o 00:02:40.741 CC lib/nvme/nvme_zns.o 00:02:40.741 CC lib/nvme/nvme_stubs.o 00:02:40.741 CC lib/nvme/nvme_auth.o 00:02:40.741 CC lib/nvme/nvme_cuse.o 00:02:40.741 CC lib/nvme/nvme_vfio_user.o 00:02:40.741 CC lib/nvme/nvme_rdma.o 00:02:41.675 LIB libspdk_thread.a 00:02:41.675 SO libspdk_thread.so.10.1 00:02:41.933 SYMLINK libspdk_thread.so 00:02:41.933 CC lib/init/json_config.o 00:02:41.933 CC lib/virtio/virtio.o 00:02:41.933 CC lib/virtio/virtio_vhost_user.o 00:02:41.933 CC lib/accel/accel.o 00:02:41.933 CC lib/init/subsystem.o 00:02:41.933 CC lib/virtio/virtio_vfio_user.o 00:02:41.933 CC lib/accel/accel_rpc.o 00:02:41.933 CC lib/virtio/virtio_pci.o 00:02:41.933 CC lib/blob/blobstore.o 00:02:41.933 CC lib/accel/accel_sw.o 00:02:41.933 CC lib/vfu_tgt/tgt_endpoint.o 00:02:41.933 CC lib/blob/request.o 00:02:41.933 CC lib/vfu_tgt/tgt_rpc.o 00:02:41.933 CC lib/init/subsystem_rpc.o 00:02:41.933 CC lib/blob/zeroes.o 00:02:41.933 CC lib/init/rpc.o 00:02:41.933 CC 
lib/blob/blob_bs_dev.o 00:02:42.192 LIB libspdk_init.a 00:02:42.451 SO libspdk_init.so.5.0 00:02:42.451 LIB libspdk_virtio.a 00:02:42.451 LIB libspdk_vfu_tgt.a 00:02:42.451 SYMLINK libspdk_init.so 00:02:42.451 SO libspdk_virtio.so.7.0 00:02:42.451 SO libspdk_vfu_tgt.so.3.0 00:02:42.451 SYMLINK libspdk_vfu_tgt.so 00:02:42.451 SYMLINK libspdk_virtio.so 00:02:42.451 CC lib/event/app.o 00:02:42.451 CC lib/event/reactor.o 00:02:42.451 CC lib/event/log_rpc.o 00:02:42.451 CC lib/event/app_rpc.o 00:02:42.451 CC lib/event/scheduler_static.o 00:02:43.018 LIB libspdk_event.a 00:02:43.018 SO libspdk_event.so.14.0 00:02:43.018 LIB libspdk_accel.a 00:02:43.018 SYMLINK libspdk_event.so 00:02:43.018 SO libspdk_accel.so.15.1 00:02:43.018 SYMLINK libspdk_accel.so 00:02:43.276 LIB libspdk_nvme.a 00:02:43.276 CC lib/bdev/bdev.o 00:02:43.276 CC lib/bdev/bdev_rpc.o 00:02:43.276 CC lib/bdev/bdev_zone.o 00:02:43.276 CC lib/bdev/part.o 00:02:43.276 CC lib/bdev/scsi_nvme.o 00:02:43.276 SO libspdk_nvme.so.13.1 00:02:43.841 SYMLINK libspdk_nvme.so 00:02:45.212 LIB libspdk_blob.a 00:02:45.212 SO libspdk_blob.so.11.0 00:02:45.212 SYMLINK libspdk_blob.so 00:02:45.469 CC lib/lvol/lvol.o 00:02:45.469 CC lib/blobfs/blobfs.o 00:02:45.469 CC lib/blobfs/tree.o 00:02:45.727 LIB libspdk_bdev.a 00:02:45.727 SO libspdk_bdev.so.15.1 00:02:45.992 SYMLINK libspdk_bdev.so 00:02:45.992 CC lib/nbd/nbd.o 00:02:45.992 CC lib/scsi/dev.o 00:02:45.992 CC lib/nbd/nbd_rpc.o 00:02:45.992 CC lib/scsi/lun.o 00:02:45.992 CC lib/ublk/ublk.o 00:02:45.992 CC lib/ublk/ublk_rpc.o 00:02:45.992 CC lib/scsi/port.o 00:02:45.992 CC lib/ftl/ftl_core.o 00:02:45.992 CC lib/nvmf/ctrlr.o 00:02:45.992 CC lib/nvmf/ctrlr_discovery.o 00:02:45.992 CC lib/ftl/ftl_init.o 00:02:45.992 CC lib/scsi/scsi.o 00:02:45.992 CC lib/nvmf/ctrlr_bdev.o 00:02:45.992 CC lib/scsi/scsi_bdev.o 00:02:45.992 CC lib/ftl/ftl_layout.o 00:02:45.992 CC lib/nvmf/subsystem.o 00:02:45.992 CC lib/scsi/scsi_pr.o 00:02:45.992 CC lib/ftl/ftl_debug.o 00:02:45.992 CC lib/nvmf/nvmf.o 00:02:45.992 CC lib/scsi/scsi_rpc.o 00:02:45.992 CC lib/scsi/task.o 00:02:45.992 CC lib/nvmf/nvmf_rpc.o 00:02:45.992 CC lib/ftl/ftl_io.o 00:02:45.992 CC lib/ftl/ftl_sb.o 00:02:45.992 CC lib/nvmf/transport.o 00:02:45.992 CC lib/ftl/ftl_l2p.o 00:02:45.992 CC lib/ftl/ftl_l2p_flat.o 00:02:45.992 CC lib/nvmf/tcp.o 00:02:45.992 CC lib/nvmf/stubs.o 00:02:45.992 CC lib/ftl/ftl_nv_cache.o 00:02:45.992 CC lib/nvmf/mdns_server.o 00:02:45.992 CC lib/ftl/ftl_band.o 00:02:45.992 CC lib/nvmf/vfio_user.o 00:02:45.992 CC lib/ftl/ftl_band_ops.o 00:02:45.992 CC lib/nvmf/rdma.o 00:02:45.992 CC lib/ftl/ftl_writer.o 00:02:45.992 CC lib/nvmf/auth.o 00:02:45.992 CC lib/ftl/ftl_rq.o 00:02:45.992 CC lib/ftl/ftl_reloc.o 00:02:45.992 CC lib/ftl/ftl_l2p_cache.o 00:02:45.992 CC lib/ftl/ftl_p2l.o 00:02:45.992 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.992 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:45.992 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.992 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.992 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:46.255 LIB libspdk_blobfs.a 00:02:46.255 SO libspdk_blobfs.so.10.0 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:46.516 SYMLINK libspdk_blobfs.so 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.516 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.516 CC lib/ftl/utils/ftl_conf.o 00:02:46.516 CC 
lib/ftl/utils/ftl_md.o 00:02:46.516 CC lib/ftl/utils/ftl_mempool.o 00:02:46.516 LIB libspdk_lvol.a 00:02:46.516 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.516 CC lib/ftl/utils/ftl_property.o 00:02:46.516 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.516 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.516 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.516 SO libspdk_lvol.so.10.0 00:02:46.516 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.516 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.516 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.776 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:46.776 SYMLINK libspdk_lvol.so 00:02:46.776 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:46.776 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:46.776 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.776 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.776 CC lib/ftl/base/ftl_base_dev.o 00:02:46.776 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.776 CC lib/ftl/ftl_trace.o 00:02:46.776 LIB libspdk_nbd.a 00:02:47.035 SO libspdk_nbd.so.7.0 00:02:47.035 SYMLINK libspdk_nbd.so 00:02:47.035 LIB libspdk_scsi.a 00:02:47.035 SO libspdk_scsi.so.9.0 00:02:47.035 LIB libspdk_ublk.a 00:02:47.035 SO libspdk_ublk.so.3.0 00:02:47.035 SYMLINK libspdk_scsi.so 00:02:47.294 SYMLINK libspdk_ublk.so 00:02:47.294 CC lib/iscsi/conn.o 00:02:47.294 CC lib/vhost/vhost.o 00:02:47.294 CC lib/iscsi/init_grp.o 00:02:47.294 CC lib/iscsi/iscsi.o 00:02:47.294 CC lib/vhost/vhost_rpc.o 00:02:47.294 CC lib/iscsi/md5.o 00:02:47.294 CC lib/vhost/vhost_scsi.o 00:02:47.294 CC lib/iscsi/param.o 00:02:47.294 CC lib/vhost/vhost_blk.o 00:02:47.294 CC lib/iscsi/portal_grp.o 00:02:47.294 CC lib/vhost/rte_vhost_user.o 00:02:47.294 CC lib/iscsi/tgt_node.o 00:02:47.294 CC lib/iscsi/iscsi_subsystem.o 00:02:47.294 CC lib/iscsi/iscsi_rpc.o 00:02:47.294 CC lib/iscsi/task.o 00:02:47.552 LIB libspdk_ftl.a 00:02:47.552 SO libspdk_ftl.so.9.0 00:02:48.118 SYMLINK libspdk_ftl.so 00:02:48.684 LIB libspdk_vhost.a 00:02:48.684 SO libspdk_vhost.so.8.0 00:02:48.684 SYMLINK libspdk_vhost.so 00:02:48.684 LIB libspdk_nvmf.a 00:02:48.684 LIB libspdk_iscsi.a 00:02:48.684 SO libspdk_nvmf.so.18.1 00:02:48.684 SO libspdk_iscsi.so.8.0 00:02:48.942 SYMLINK libspdk_iscsi.so 00:02:48.942 SYMLINK libspdk_nvmf.so 00:02:49.200 CC module/env_dpdk/env_dpdk_rpc.o 00:02:49.200 CC module/vfu_device/vfu_virtio.o 00:02:49.200 CC module/vfu_device/vfu_virtio_blk.o 00:02:49.200 CC module/vfu_device/vfu_virtio_scsi.o 00:02:49.200 CC module/vfu_device/vfu_virtio_rpc.o 00:02:49.200 CC module/accel/error/accel_error.o 00:02:49.200 CC module/keyring/linux/keyring.o 00:02:49.200 CC module/accel/dsa/accel_dsa.o 00:02:49.200 CC module/blob/bdev/blob_bdev.o 00:02:49.200 CC module/sock/posix/posix.o 00:02:49.200 CC module/accel/error/accel_error_rpc.o 00:02:49.200 CC module/accel/iaa/accel_iaa.o 00:02:49.200 CC module/keyring/file/keyring.o 00:02:49.200 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:49.200 CC module/scheduler/gscheduler/gscheduler.o 00:02:49.200 CC module/accel/ioat/accel_ioat.o 00:02:49.200 CC module/accel/iaa/accel_iaa_rpc.o 00:02:49.200 CC module/keyring/file/keyring_rpc.o 00:02:49.200 CC module/keyring/linux/keyring_rpc.o 00:02:49.200 CC module/accel/ioat/accel_ioat_rpc.o 00:02:49.200 CC module/accel/dsa/accel_dsa_rpc.o 00:02:49.200 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:49.458 LIB libspdk_env_dpdk_rpc.a 00:02:49.458 SO libspdk_env_dpdk_rpc.so.6.0 00:02:49.458 SYMLINK libspdk_env_dpdk_rpc.so 00:02:49.458 LIB libspdk_keyring_linux.a 00:02:49.458 LIB libspdk_keyring_file.a 00:02:49.458 LIB 
libspdk_scheduler_gscheduler.a 00:02:49.458 LIB libspdk_scheduler_dpdk_governor.a 00:02:49.458 SO libspdk_keyring_file.so.1.0 00:02:49.458 SO libspdk_keyring_linux.so.1.0 00:02:49.458 SO libspdk_scheduler_gscheduler.so.4.0 00:02:49.458 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:49.458 LIB libspdk_accel_error.a 00:02:49.458 LIB libspdk_accel_ioat.a 00:02:49.458 LIB libspdk_scheduler_dynamic.a 00:02:49.458 LIB libspdk_accel_iaa.a 00:02:49.458 SO libspdk_accel_error.so.2.0 00:02:49.458 SO libspdk_accel_ioat.so.6.0 00:02:49.458 SYMLINK libspdk_scheduler_gscheduler.so 00:02:49.458 SO libspdk_scheduler_dynamic.so.4.0 00:02:49.717 SYMLINK libspdk_keyring_file.so 00:02:49.717 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:49.717 SYMLINK libspdk_keyring_linux.so 00:02:49.717 SO libspdk_accel_iaa.so.3.0 00:02:49.717 LIB libspdk_accel_dsa.a 00:02:49.717 SYMLINK libspdk_accel_error.so 00:02:49.717 SYMLINK libspdk_accel_ioat.so 00:02:49.717 SYMLINK libspdk_scheduler_dynamic.so 00:02:49.717 LIB libspdk_blob_bdev.a 00:02:49.717 SO libspdk_accel_dsa.so.5.0 00:02:49.717 SYMLINK libspdk_accel_iaa.so 00:02:49.717 SO libspdk_blob_bdev.so.11.0 00:02:49.717 SYMLINK libspdk_accel_dsa.so 00:02:49.717 SYMLINK libspdk_blob_bdev.so 00:02:49.976 LIB libspdk_vfu_device.a 00:02:49.976 SO libspdk_vfu_device.so.3.0 00:02:49.976 CC module/bdev/null/bdev_null.o 00:02:49.976 CC module/bdev/gpt/gpt.o 00:02:49.976 CC module/bdev/malloc/bdev_malloc.o 00:02:49.976 CC module/bdev/nvme/bdev_nvme.o 00:02:49.976 CC module/bdev/null/bdev_null_rpc.o 00:02:49.976 CC module/bdev/error/vbdev_error.o 00:02:49.976 CC module/bdev/gpt/vbdev_gpt.o 00:02:49.976 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:49.976 CC module/bdev/error/vbdev_error_rpc.o 00:02:49.976 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.976 CC module/bdev/delay/vbdev_delay.o 00:02:49.976 CC module/bdev/nvme/nvme_rpc.o 00:02:49.976 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:49.976 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.976 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.976 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.976 CC module/bdev/split/vbdev_split.o 00:02:49.976 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.976 CC module/bdev/nvme/vbdev_opal.o 00:02:49.976 CC module/bdev/lvol/vbdev_lvol.o 00:02:49.976 CC module/bdev/ftl/bdev_ftl.o 00:02:49.976 CC module/blobfs/bdev/blobfs_bdev.o 00:02:49.976 CC module/bdev/aio/bdev_aio.o 00:02:49.976 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:49.976 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.976 CC module/bdev/raid/bdev_raid.o 00:02:49.976 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.976 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.976 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.976 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.976 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.976 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.976 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.976 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.976 CC module/bdev/raid/raid0.o 00:02:49.976 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.976 CC module/bdev/raid/raid1.o 00:02:49.976 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.976 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.976 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.976 CC module/bdev/raid/concat.o 00:02:49.976 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.976 SYMLINK libspdk_vfu_device.so 00:02:50.234 LIB libspdk_sock_posix.a 00:02:50.234 SO libspdk_sock_posix.so.6.0 00:02:50.492 LIB libspdk_blobfs_bdev.a 00:02:50.492 SO 
libspdk_blobfs_bdev.so.6.0 00:02:50.492 LIB libspdk_bdev_aio.a 00:02:50.492 SYMLINK libspdk_sock_posix.so 00:02:50.492 LIB libspdk_bdev_error.a 00:02:50.492 LIB libspdk_bdev_split.a 00:02:50.492 SO libspdk_bdev_aio.so.6.0 00:02:50.492 SO libspdk_bdev_error.so.6.0 00:02:50.492 SYMLINK libspdk_blobfs_bdev.so 00:02:50.492 LIB libspdk_bdev_zone_block.a 00:02:50.492 SO libspdk_bdev_split.so.6.0 00:02:50.492 LIB libspdk_bdev_ftl.a 00:02:50.492 LIB libspdk_bdev_null.a 00:02:50.492 SO libspdk_bdev_zone_block.so.6.0 00:02:50.492 SYMLINK libspdk_bdev_aio.so 00:02:50.492 LIB libspdk_bdev_gpt.a 00:02:50.492 SO libspdk_bdev_ftl.so.6.0 00:02:50.492 SYMLINK libspdk_bdev_error.so 00:02:50.492 SO libspdk_bdev_null.so.6.0 00:02:50.492 LIB libspdk_bdev_passthru.a 00:02:50.492 SYMLINK libspdk_bdev_split.so 00:02:50.492 SO libspdk_bdev_gpt.so.6.0 00:02:50.492 LIB libspdk_bdev_delay.a 00:02:50.492 SO libspdk_bdev_passthru.so.6.0 00:02:50.492 SYMLINK libspdk_bdev_zone_block.so 00:02:50.492 LIB libspdk_bdev_iscsi.a 00:02:50.492 SYMLINK libspdk_bdev_ftl.so 00:02:50.492 LIB libspdk_bdev_malloc.a 00:02:50.492 SYMLINK libspdk_bdev_null.so 00:02:50.492 SO libspdk_bdev_delay.so.6.0 00:02:50.492 SYMLINK libspdk_bdev_gpt.so 00:02:50.492 SO libspdk_bdev_iscsi.so.6.0 00:02:50.492 SO libspdk_bdev_malloc.so.6.0 00:02:50.492 SYMLINK libspdk_bdev_passthru.so 00:02:50.749 SYMLINK libspdk_bdev_delay.so 00:02:50.749 SYMLINK libspdk_bdev_iscsi.so 00:02:50.749 SYMLINK libspdk_bdev_malloc.so 00:02:50.749 LIB libspdk_bdev_lvol.a 00:02:50.749 SO libspdk_bdev_lvol.so.6.0 00:02:50.749 LIB libspdk_bdev_virtio.a 00:02:50.749 SO libspdk_bdev_virtio.so.6.0 00:02:50.749 SYMLINK libspdk_bdev_lvol.so 00:02:50.749 SYMLINK libspdk_bdev_virtio.so 00:02:51.348 LIB libspdk_bdev_raid.a 00:02:51.348 SO libspdk_bdev_raid.so.6.0 00:02:51.348 SYMLINK libspdk_bdev_raid.so 00:02:52.284 LIB libspdk_bdev_nvme.a 00:02:52.284 SO libspdk_bdev_nvme.so.7.0 00:02:52.542 SYMLINK libspdk_bdev_nvme.so 00:02:52.800 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.800 CC module/event/subsystems/vmd/vmd.o 00:02:52.800 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.800 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.800 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:52.800 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:52.800 CC module/event/subsystems/sock/sock.o 00:02:52.800 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.800 CC module/event/subsystems/keyring/keyring.o 00:02:53.058 LIB libspdk_event_keyring.a 00:02:53.058 LIB libspdk_event_vhost_blk.a 00:02:53.058 LIB libspdk_event_scheduler.a 00:02:53.058 LIB libspdk_event_vfu_tgt.a 00:02:53.058 LIB libspdk_event_vmd.a 00:02:53.058 LIB libspdk_event_sock.a 00:02:53.058 SO libspdk_event_keyring.so.1.0 00:02:53.058 LIB libspdk_event_iobuf.a 00:02:53.058 SO libspdk_event_vhost_blk.so.3.0 00:02:53.058 SO libspdk_event_scheduler.so.4.0 00:02:53.058 SO libspdk_event_vfu_tgt.so.3.0 00:02:53.058 SO libspdk_event_sock.so.5.0 00:02:53.058 SO libspdk_event_vmd.so.6.0 00:02:53.058 SO libspdk_event_iobuf.so.3.0 00:02:53.058 SYMLINK libspdk_event_keyring.so 00:02:53.058 SYMLINK libspdk_event_vhost_blk.so 00:02:53.058 SYMLINK libspdk_event_scheduler.so 00:02:53.058 SYMLINK libspdk_event_vfu_tgt.so 00:02:53.058 SYMLINK libspdk_event_sock.so 00:02:53.058 SYMLINK libspdk_event_vmd.so 00:02:53.058 SYMLINK libspdk_event_iobuf.so 00:02:53.316 CC module/event/subsystems/accel/accel.o 00:02:53.316 LIB libspdk_event_accel.a 00:02:53.316 SO libspdk_event_accel.so.6.0 00:02:53.575 SYMLINK 
libspdk_event_accel.so 00:02:53.575 CC module/event/subsystems/bdev/bdev.o 00:02:53.832 LIB libspdk_event_bdev.a 00:02:53.832 SO libspdk_event_bdev.so.6.0 00:02:53.832 SYMLINK libspdk_event_bdev.so 00:02:54.090 CC module/event/subsystems/ublk/ublk.o 00:02:54.090 CC module/event/subsystems/nbd/nbd.o 00:02:54.090 CC module/event/subsystems/scsi/scsi.o 00:02:54.090 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:54.090 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:54.090 LIB libspdk_event_nbd.a 00:02:54.090 LIB libspdk_event_ublk.a 00:02:54.090 LIB libspdk_event_scsi.a 00:02:54.090 SO libspdk_event_nbd.so.6.0 00:02:54.090 SO libspdk_event_ublk.so.3.0 00:02:54.347 SO libspdk_event_scsi.so.6.0 00:02:54.347 SYMLINK libspdk_event_ublk.so 00:02:54.347 SYMLINK libspdk_event_nbd.so 00:02:54.347 SYMLINK libspdk_event_scsi.so 00:02:54.347 LIB libspdk_event_nvmf.a 00:02:54.347 SO libspdk_event_nvmf.so.6.0 00:02:54.347 SYMLINK libspdk_event_nvmf.so 00:02:54.347 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.347 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.604 LIB libspdk_event_vhost_scsi.a 00:02:54.604 LIB libspdk_event_iscsi.a 00:02:54.604 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.604 SO libspdk_event_iscsi.so.6.0 00:02:54.604 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.604 SYMLINK libspdk_event_iscsi.so 00:02:54.862 SO libspdk.so.6.0 00:02:54.862 SYMLINK libspdk.so 00:02:54.862 CXX app/trace/trace.o 00:02:54.862 CC app/trace_record/trace_record.o 00:02:54.862 CC app/spdk_nvme_perf/perf.o 00:02:54.862 CC app/spdk_nvme_identify/identify.o 00:02:54.862 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.862 TEST_HEADER include/spdk/accel.h 00:02:54.862 CC app/spdk_top/spdk_top.o 00:02:54.862 CC test/rpc_client/rpc_client_test.o 00:02:54.862 CC app/spdk_lspci/spdk_lspci.o 00:02:54.862 TEST_HEADER include/spdk/accel_module.h 00:02:54.862 TEST_HEADER include/spdk/assert.h 00:02:54.862 TEST_HEADER include/spdk/barrier.h 00:02:54.862 TEST_HEADER include/spdk/base64.h 00:02:54.862 TEST_HEADER include/spdk/bdev.h 00:02:54.862 TEST_HEADER include/spdk/bdev_module.h 00:02:54.862 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.126 TEST_HEADER include/spdk/bit_array.h 00:02:55.126 TEST_HEADER include/spdk/bit_pool.h 00:02:55.126 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.126 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.126 TEST_HEADER include/spdk/blobfs.h 00:02:55.126 TEST_HEADER include/spdk/blob.h 00:02:55.126 TEST_HEADER include/spdk/conf.h 00:02:55.126 TEST_HEADER include/spdk/config.h 00:02:55.126 TEST_HEADER include/spdk/cpuset.h 00:02:55.126 TEST_HEADER include/spdk/crc16.h 00:02:55.126 TEST_HEADER include/spdk/crc32.h 00:02:55.126 TEST_HEADER include/spdk/crc64.h 00:02:55.126 TEST_HEADER include/spdk/dif.h 00:02:55.126 TEST_HEADER include/spdk/dma.h 00:02:55.126 TEST_HEADER include/spdk/endian.h 00:02:55.126 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.126 TEST_HEADER include/spdk/event.h 00:02:55.126 TEST_HEADER include/spdk/env.h 00:02:55.126 TEST_HEADER include/spdk/fd_group.h 00:02:55.126 TEST_HEADER include/spdk/file.h 00:02:55.126 TEST_HEADER include/spdk/fd.h 00:02:55.126 TEST_HEADER include/spdk/ftl.h 00:02:55.126 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.126 TEST_HEADER include/spdk/hexlify.h 00:02:55.126 TEST_HEADER include/spdk/histogram_data.h 00:02:55.126 TEST_HEADER include/spdk/idxd.h 00:02:55.126 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.126 TEST_HEADER include/spdk/ioat.h 00:02:55.126 TEST_HEADER include/spdk/init.h 00:02:55.126 TEST_HEADER 
include/spdk/iscsi_spec.h 00:02:55.126 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.126 TEST_HEADER include/spdk/json.h 00:02:55.126 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.126 TEST_HEADER include/spdk/keyring.h 00:02:55.126 TEST_HEADER include/spdk/keyring_module.h 00:02:55.126 TEST_HEADER include/spdk/likely.h 00:02:55.126 TEST_HEADER include/spdk/log.h 00:02:55.126 TEST_HEADER include/spdk/lvol.h 00:02:55.126 TEST_HEADER include/spdk/memory.h 00:02:55.126 TEST_HEADER include/spdk/mmio.h 00:02:55.126 TEST_HEADER include/spdk/nbd.h 00:02:55.126 TEST_HEADER include/spdk/notify.h 00:02:55.126 TEST_HEADER include/spdk/nvme.h 00:02:55.126 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.126 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.126 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.126 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.126 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.126 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.126 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.126 TEST_HEADER include/spdk/nvmf.h 00:02:55.126 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.126 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.126 TEST_HEADER include/spdk/opal.h 00:02:55.126 TEST_HEADER include/spdk/opal_spec.h 00:02:55.126 TEST_HEADER include/spdk/pci_ids.h 00:02:55.126 TEST_HEADER include/spdk/pipe.h 00:02:55.126 TEST_HEADER include/spdk/queue.h 00:02:55.126 TEST_HEADER include/spdk/reduce.h 00:02:55.126 TEST_HEADER include/spdk/rpc.h 00:02:55.126 TEST_HEADER include/spdk/scheduler.h 00:02:55.126 TEST_HEADER include/spdk/scsi.h 00:02:55.126 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.126 TEST_HEADER include/spdk/sock.h 00:02:55.126 TEST_HEADER include/spdk/string.h 00:02:55.126 TEST_HEADER include/spdk/stdinc.h 00:02:55.126 TEST_HEADER include/spdk/thread.h 00:02:55.126 TEST_HEADER include/spdk/trace.h 00:02:55.126 TEST_HEADER include/spdk/trace_parser.h 00:02:55.126 TEST_HEADER include/spdk/tree.h 00:02:55.126 TEST_HEADER include/spdk/ublk.h 00:02:55.126 TEST_HEADER include/spdk/util.h 00:02:55.126 TEST_HEADER include/spdk/uuid.h 00:02:55.126 TEST_HEADER include/spdk/version.h 00:02:55.126 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.126 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.126 TEST_HEADER include/spdk/vhost.h 00:02:55.126 TEST_HEADER include/spdk/vmd.h 00:02:55.126 TEST_HEADER include/spdk/xor.h 00:02:55.126 TEST_HEADER include/spdk/zipf.h 00:02:55.126 CXX test/cpp_headers/accel.o 00:02:55.126 CXX test/cpp_headers/accel_module.o 00:02:55.126 CXX test/cpp_headers/assert.o 00:02:55.126 CXX test/cpp_headers/barrier.o 00:02:55.126 CXX test/cpp_headers/base64.o 00:02:55.126 CXX test/cpp_headers/bdev.o 00:02:55.126 CXX test/cpp_headers/bdev_module.o 00:02:55.126 CXX test/cpp_headers/bdev_zone.o 00:02:55.126 CXX test/cpp_headers/bit_array.o 00:02:55.126 CXX test/cpp_headers/bit_pool.o 00:02:55.126 CXX test/cpp_headers/blob_bdev.o 00:02:55.126 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.126 CXX test/cpp_headers/blobfs.o 00:02:55.126 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.126 CXX test/cpp_headers/blob.o 00:02:55.126 CXX test/cpp_headers/conf.o 00:02:55.126 CXX test/cpp_headers/config.o 00:02:55.126 CXX test/cpp_headers/cpuset.o 00:02:55.126 CXX test/cpp_headers/crc16.o 00:02:55.126 CC app/nvmf_tgt/nvmf_main.o 00:02:55.126 CC app/spdk_dd/spdk_dd.o 00:02:55.126 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.126 CC app/spdk_tgt/spdk_tgt.o 00:02:55.126 CXX test/cpp_headers/crc32.o 00:02:55.126 CC examples/ioat/verify/verify.o 00:02:55.126 CC 
examples/ioat/perf/perf.o 00:02:55.126 CC test/thread/poller_perf/poller_perf.o 00:02:55.126 CC test/env/vtophys/vtophys.o 00:02:55.126 CC examples/util/zipf/zipf.o 00:02:55.126 CC app/fio/nvme/fio_plugin.o 00:02:55.126 CC test/app/histogram_perf/histogram_perf.o 00:02:55.126 CC test/env/pci/pci_ut.o 00:02:55.126 CC test/app/stub/stub.o 00:02:55.126 CC test/env/memory/memory_ut.o 00:02:55.126 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:55.126 CC test/app/jsoncat/jsoncat.o 00:02:55.126 CC app/fio/bdev/fio_plugin.o 00:02:55.126 CC test/dma/test_dma/test_dma.o 00:02:55.126 CC test/app/bdev_svc/bdev_svc.o 00:02:55.389 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.389 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.389 LINK spdk_lspci 00:02:55.389 LINK rpc_client_test 00:02:55.389 LINK spdk_nvme_discover 00:02:55.389 LINK vtophys 00:02:55.389 LINK poller_perf 00:02:55.389 LINK jsoncat 00:02:55.389 LINK zipf 00:02:55.389 LINK histogram_perf 00:02:55.389 CXX test/cpp_headers/crc64.o 00:02:55.389 CXX test/cpp_headers/dif.o 00:02:55.389 LINK interrupt_tgt 00:02:55.389 CXX test/cpp_headers/dma.o 00:02:55.389 LINK nvmf_tgt 00:02:55.389 CXX test/cpp_headers/endian.o 00:02:55.656 CXX test/cpp_headers/env_dpdk.o 00:02:55.656 LINK env_dpdk_post_init 00:02:55.656 LINK spdk_trace_record 00:02:55.656 CXX test/cpp_headers/env.o 00:02:55.656 CXX test/cpp_headers/event.o 00:02:55.656 CXX test/cpp_headers/fd_group.o 00:02:55.656 CXX test/cpp_headers/fd.o 00:02:55.656 CXX test/cpp_headers/file.o 00:02:55.656 CXX test/cpp_headers/ftl.o 00:02:55.656 LINK stub 00:02:55.656 CXX test/cpp_headers/gpt_spec.o 00:02:55.656 CXX test/cpp_headers/hexlify.o 00:02:55.656 LINK iscsi_tgt 00:02:55.656 CXX test/cpp_headers/histogram_data.o 00:02:55.656 CXX test/cpp_headers/idxd.o 00:02:55.656 LINK spdk_tgt 00:02:55.656 LINK verify 00:02:55.656 CXX test/cpp_headers/idxd_spec.o 00:02:55.656 LINK bdev_svc 00:02:55.656 LINK ioat_perf 00:02:55.656 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.656 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.656 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.656 CXX test/cpp_headers/init.o 00:02:55.656 CXX test/cpp_headers/ioat.o 00:02:55.656 CXX test/cpp_headers/ioat_spec.o 00:02:55.915 CXX test/cpp_headers/iscsi_spec.o 00:02:55.915 LINK spdk_dd 00:02:55.915 LINK spdk_trace 00:02:55.915 CXX test/cpp_headers/json.o 00:02:55.915 CXX test/cpp_headers/jsonrpc.o 00:02:55.915 CXX test/cpp_headers/keyring.o 00:02:55.915 CXX test/cpp_headers/keyring_module.o 00:02:55.915 CXX test/cpp_headers/likely.o 00:02:55.915 CXX test/cpp_headers/log.o 00:02:55.915 CXX test/cpp_headers/lvol.o 00:02:55.915 CXX test/cpp_headers/memory.o 00:02:55.915 CXX test/cpp_headers/mmio.o 00:02:55.915 CXX test/cpp_headers/nbd.o 00:02:55.915 CXX test/cpp_headers/notify.o 00:02:55.915 LINK pci_ut 00:02:55.915 CXX test/cpp_headers/nvme.o 00:02:55.915 CXX test/cpp_headers/nvme_intel.o 00:02:55.915 CXX test/cpp_headers/nvme_ocssd.o 00:02:55.915 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.915 LINK test_dma 00:02:55.915 CXX test/cpp_headers/nvme_spec.o 00:02:55.915 CXX test/cpp_headers/nvme_zns.o 00:02:55.915 CXX test/cpp_headers/nvmf_cmd.o 00:02:55.915 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.915 CXX test/cpp_headers/nvmf.o 00:02:56.178 CXX test/cpp_headers/nvmf_spec.o 00:02:56.178 CXX test/cpp_headers/nvmf_transport.o 00:02:56.178 CC test/event/event_perf/event_perf.o 00:02:56.178 CC test/event/reactor/reactor.o 00:02:56.178 CC test/event/reactor_perf/reactor_perf.o 00:02:56.178 CC 
examples/sock/hello_world/hello_sock.o 00:02:56.178 CXX test/cpp_headers/opal.o 00:02:56.178 CC examples/vmd/led/led.o 00:02:56.178 CC test/event/app_repeat/app_repeat.o 00:02:56.178 CXX test/cpp_headers/opal_spec.o 00:02:56.178 CC examples/vmd/lsvmd/lsvmd.o 00:02:56.178 CXX test/cpp_headers/pci_ids.o 00:02:56.178 CC examples/idxd/perf/perf.o 00:02:56.178 LINK nvme_fuzz 00:02:56.178 CC examples/thread/thread/thread_ex.o 00:02:56.178 CC test/event/scheduler/scheduler.o 00:02:56.178 CXX test/cpp_headers/pipe.o 00:02:56.178 LINK spdk_nvme 00:02:56.178 LINK spdk_bdev 00:02:56.437 CXX test/cpp_headers/queue.o 00:02:56.437 CXX test/cpp_headers/reduce.o 00:02:56.437 CXX test/cpp_headers/rpc.o 00:02:56.437 CXX test/cpp_headers/scheduler.o 00:02:56.437 CXX test/cpp_headers/scsi.o 00:02:56.437 CXX test/cpp_headers/scsi_spec.o 00:02:56.437 CXX test/cpp_headers/sock.o 00:02:56.437 CXX test/cpp_headers/stdinc.o 00:02:56.437 CXX test/cpp_headers/string.o 00:02:56.437 CXX test/cpp_headers/thread.o 00:02:56.437 CXX test/cpp_headers/trace.o 00:02:56.437 CXX test/cpp_headers/trace_parser.o 00:02:56.437 LINK event_perf 00:02:56.437 LINK reactor 00:02:56.437 CXX test/cpp_headers/tree.o 00:02:56.437 CXX test/cpp_headers/ublk.o 00:02:56.437 CXX test/cpp_headers/util.o 00:02:56.437 LINK reactor_perf 00:02:56.437 CXX test/cpp_headers/uuid.o 00:02:56.437 CXX test/cpp_headers/version.o 00:02:56.437 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.437 LINK lsvmd 00:02:56.437 LINK led 00:02:56.437 CXX test/cpp_headers/vfio_user_spec.o 00:02:56.437 CXX test/cpp_headers/vhost.o 00:02:56.437 LINK vhost_fuzz 00:02:56.437 LINK mem_callbacks 00:02:56.437 LINK app_repeat 00:02:56.437 CXX test/cpp_headers/vmd.o 00:02:56.437 CXX test/cpp_headers/xor.o 00:02:56.437 CXX test/cpp_headers/zipf.o 00:02:56.437 LINK spdk_nvme_perf 00:02:56.437 CC app/vhost/vhost.o 00:02:56.437 LINK spdk_nvme_identify 00:02:56.695 LINK hello_sock 00:02:56.695 LINK spdk_top 00:02:56.695 LINK thread 00:02:56.695 LINK scheduler 00:02:56.695 CC test/nvme/overhead/overhead.o 00:02:56.695 CC test/nvme/aer/aer.o 00:02:56.695 CC test/nvme/startup/startup.o 00:02:56.695 CC test/nvme/e2edp/nvme_dp.o 00:02:56.695 CC test/nvme/err_injection/err_injection.o 00:02:56.695 CC test/nvme/sgl/sgl.o 00:02:56.695 CC test/nvme/reset/reset.o 00:02:56.953 CC test/nvme/reserve/reserve.o 00:02:56.953 LINK idxd_perf 00:02:56.953 CC test/blobfs/mkfs/mkfs.o 00:02:56.953 CC test/accel/dif/dif.o 00:02:56.953 CC test/nvme/simple_copy/simple_copy.o 00:02:56.953 CC test/nvme/connect_stress/connect_stress.o 00:02:56.953 CC test/nvme/boot_partition/boot_partition.o 00:02:56.953 CC test/nvme/fused_ordering/fused_ordering.o 00:02:56.953 CC test/nvme/compliance/nvme_compliance.o 00:02:56.953 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:56.953 CC test/nvme/fdp/fdp.o 00:02:56.953 CC test/nvme/cuse/cuse.o 00:02:56.953 LINK vhost 00:02:56.953 CC test/lvol/esnap/esnap.o 00:02:57.211 LINK boot_partition 00:02:57.211 LINK reserve 00:02:57.211 CC examples/nvme/hello_world/hello_world.o 00:02:57.211 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:57.211 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.211 CC examples/nvme/reconnect/reconnect.o 00:02:57.211 LINK mkfs 00:02:57.211 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.211 LINK connect_stress 00:02:57.211 LINK doorbell_aers 00:02:57.211 CC examples/nvme/hotplug/hotplug.o 00:02:57.211 LINK startup 00:02:57.211 CC examples/nvme/abort/abort.o 00:02:57.211 CC examples/nvme/arbitration/arbitration.o 00:02:57.211 LINK simple_copy 
00:02:57.211 LINK fused_ordering 00:02:57.211 LINK nvme_dp 00:02:57.211 LINK err_injection 00:02:57.211 LINK reset 00:02:57.211 LINK overhead 00:02:57.211 LINK memory_ut 00:02:57.211 LINK sgl 00:02:57.211 LINK aer 00:02:57.211 LINK nvme_compliance 00:02:57.211 CC examples/accel/perf/accel_perf.o 00:02:57.468 LINK pmr_persistence 00:02:57.468 CC examples/blob/cli/blobcli.o 00:02:57.468 LINK cmb_copy 00:02:57.468 LINK fdp 00:02:57.468 CC examples/blob/hello_world/hello_blob.o 00:02:57.468 LINK dif 00:02:57.468 LINK hello_world 00:02:57.468 LINK hotplug 00:02:57.468 LINK arbitration 00:02:57.468 LINK reconnect 00:02:57.725 LINK abort 00:02:57.725 LINK hello_blob 00:02:57.725 LINK nvme_manage 00:02:57.725 LINK accel_perf 00:02:57.725 CC test/bdev/bdevio/bdevio.o 00:02:57.982 LINK blobcli 00:02:57.982 LINK iscsi_fuzz 00:02:58.240 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.240 CC examples/bdev/bdevperf/bdevperf.o 00:02:58.240 LINK bdevio 00:02:58.497 LINK hello_bdev 00:02:58.497 LINK cuse 00:02:59.063 LINK bdevperf 00:02:59.321 CC examples/nvmf/nvmf/nvmf.o 00:02:59.579 LINK nvmf 00:03:02.114 LINK esnap 00:03:02.114 00:03:02.114 real 0m41.532s 00:03:02.114 user 7m25.412s 00:03:02.114 sys 1m50.431s 00:03:02.114 06:50:31 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:02.114 06:50:31 make -- common/autotest_common.sh@10 -- $ set +x 00:03:02.114 ************************************ 00:03:02.114 END TEST make 00:03:02.114 ************************************ 00:03:02.114 06:50:31 -- common/autotest_common.sh@1142 -- $ return 0 00:03:02.114 06:50:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:02.114 06:50:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:02.114 06:50:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.114 06:50:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.114 06:50:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.114 06:50:31 -- pm/common@44 -- $ pid=1278985 00:03:02.114 06:50:31 -- pm/common@50 -- $ kill -TERM 1278985 00:03:02.114 06:50:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.114 06:50:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.114 06:50:31 -- pm/common@44 -- $ pid=1278987 00:03:02.114 06:50:31 -- pm/common@50 -- $ kill -TERM 1278987 00:03:02.114 06:50:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.114 06:50:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:02.114 06:50:31 -- pm/common@44 -- $ pid=1278989 00:03:02.114 06:50:31 -- pm/common@50 -- $ kill -TERM 1278989 00:03:02.114 06:50:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.114 06:50:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:02.114 06:50:31 -- pm/common@44 -- $ pid=1279021 00:03:02.114 06:50:31 -- pm/common@50 -- $ sudo -E kill -TERM 1279021 00:03:02.114 06:50:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:02.114 06:50:31 -- nvmf/common.sh@7 -- # uname -s 00:03:02.114 06:50:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.114 06:50:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.114 06:50:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.114 
06:50:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.114 06:50:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.114 06:50:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.114 06:50:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:02.114 06:50:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.114 06:50:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.114 06:50:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.114 06:50:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:02.114 06:50:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:02.114 06:50:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.114 06:50:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.114 06:50:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:02.114 06:50:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.114 06:50:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:02.114 06:50:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.114 06:50:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.114 06:50:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.114 06:50:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.114 06:50:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.114 06:50:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.114 06:50:31 -- paths/export.sh@5 -- # export PATH 00:03:02.114 06:50:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.114 06:50:31 -- nvmf/common.sh@47 -- # : 0 00:03:02.114 06:50:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:02.114 06:50:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:02.114 06:50:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.114 06:50:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.114 06:50:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.114 06:50:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:02.114 06:50:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:02.114 06:50:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:02.114 06:50:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.114 06:50:31 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.114 06:50:31 -- spdk/autotest.sh@32 
-- # '[' Linux = Linux ']' 00:03:02.114 06:50:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.114 06:50:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:02.115 06:50:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.115 06:50:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:02.115 06:50:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.115 06:50:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.115 06:50:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.115 06:50:31 -- spdk/autotest.sh@48 -- # udevadm_pid=1350318 00:03:02.115 06:50:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.115 06:50:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.115 06:50:31 -- pm/common@17 -- # local monitor 00:03:02.115 06:50:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.115 06:50:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.115 06:50:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.115 06:50:31 -- pm/common@21 -- # date +%s 00:03:02.115 06:50:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.115 06:50:31 -- pm/common@21 -- # date +%s 00:03:02.115 06:50:31 -- pm/common@25 -- # sleep 1 00:03:02.115 06:50:31 -- pm/common@21 -- # date +%s 00:03:02.115 06:50:31 -- pm/common@21 -- # date +%s 00:03:02.115 06:50:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720846231 00:03:02.115 06:50:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720846231 00:03:02.115 06:50:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720846231 00:03:02.115 06:50:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720846231 00:03:02.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720846231_collect-vmstat.pm.log 00:03:02.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720846231_collect-cpu-load.pm.log 00:03:02.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720846231_collect-cpu-temp.pm.log 00:03:02.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720846231_collect-bmc-pm.bmc.pm.log 00:03:03.308 06:50:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.308 06:50:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:03.308 06:50:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:03.308 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:03:03.308 06:50:32 -- spdk/autotest.sh@59 -- # create_test_list 
00:03:03.308 06:50:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:03.308 06:50:32 -- common/autotest_common.sh@10 -- # set +x 00:03:03.308 06:50:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:03.308 06:50:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.308 06:50:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.308 06:50:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:03.308 06:50:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.308 06:50:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:03.308 06:50:32 -- common/autotest_common.sh@1455 -- # uname 00:03:03.308 06:50:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:03.308 06:50:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:03.308 06:50:32 -- common/autotest_common.sh@1475 -- # uname 00:03:03.308 06:50:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:03.308 06:50:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:03.308 06:50:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:03.308 06:50:32 -- spdk/autotest.sh@72 -- # hash lcov 00:03:03.308 06:50:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:03.308 06:50:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:03.308 --rc lcov_branch_coverage=1 00:03:03.308 --rc lcov_function_coverage=1 00:03:03.308 --rc genhtml_branch_coverage=1 00:03:03.308 --rc genhtml_function_coverage=1 00:03:03.308 --rc genhtml_legend=1 00:03:03.308 --rc geninfo_all_blocks=1 00:03:03.308 ' 00:03:03.308 06:50:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:03.308 --rc lcov_branch_coverage=1 00:03:03.308 --rc lcov_function_coverage=1 00:03:03.308 --rc genhtml_branch_coverage=1 00:03:03.308 --rc genhtml_function_coverage=1 00:03:03.308 --rc genhtml_legend=1 00:03:03.308 --rc geninfo_all_blocks=1 00:03:03.308 ' 00:03:03.308 06:50:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:03.308 --rc lcov_branch_coverage=1 00:03:03.308 --rc lcov_function_coverage=1 00:03:03.308 --rc genhtml_branch_coverage=1 00:03:03.308 --rc genhtml_function_coverage=1 00:03:03.308 --rc genhtml_legend=1 00:03:03.308 --rc geninfo_all_blocks=1 00:03:03.308 --no-external' 00:03:03.308 06:50:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:03.308 --rc lcov_branch_coverage=1 00:03:03.308 --rc lcov_function_coverage=1 00:03:03.308 --rc genhtml_branch_coverage=1 00:03:03.308 --rc genhtml_function_coverage=1 00:03:03.308 --rc genhtml_legend=1 00:03:03.308 --rc geninfo_all_blocks=1 00:03:03.308 --no-external' 00:03:03.308 06:50:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:03.308 lcov: LCOV version 1.14 00:03:03.308 06:50:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:08.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 
00:03:08.629 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
[the same "no functions found" / "geninfo: WARNING: GCOV did not produce any data" warning pair repeats for every remaining .gcno under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ (accel_module.gcno through scheduler.gcno); repeated warnings trimmed]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:08.889 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:35.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:35.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:41.972 06:51:10 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:41.972 06:51:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.972 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:03:41.972 06:51:10 -- spdk/autotest.sh@91 -- # rm -f 00:03:41.972 06:51:10 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.232 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:42.232 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:42.232 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:42.232 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:42.232 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:42.232 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:42.232 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:42.232 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:42.232 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:42.232 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:42.232 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:42.232 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:42.232 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:42.232 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:42.232 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:42.232 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:42.232 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:42.491 06:51:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:42.491 06:51:11 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:42.491 06:51:11 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:42.491 06:51:11 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:42.491 06:51:11 -- 
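The warnings condensed above are the expected outcome for the cpp_headers unit tests: each stub translation unit appears to include just one SPDK header and defines no functions of its own, so its .gcno notes file carries nothing for coverage to report. A minimal sketch of how such a warning arises (illustrative commands only -- the actual autotest lcov invocation is not shown in this excerpt, and the include path is an assumption):

```bash
# Hypothetical repro: compile a header-only stub with coverage enabled,
# then capture a baseline; geninfo finds no instrumented functions.
echo '#include "spdk/conf.h"' > conf.cpp
g++ --coverage -I /path/to/spdk/include -c conf.cpp -o conf.o   # emits conf.gcno
# --initial captures a zero-coverage baseline straight from the .gcno
# files (geninfo runs under the hood and prints the warnings seen above):
lcov --capture --initial --directory . --output-file baseline.info
```

For stubs like these the warning is harmless; it only means the file contributes no lines to the final coverage report.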
00:03:42.491 06:51:11 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:42.491 06:51:11 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:42.491 06:51:11 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:42.491 06:51:11 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.491 06:51:11 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:42.491 06:51:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:42.491 06:51:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.491 06:51:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:42.491 06:51:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:42.491 06:51:11 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:42.491 06:51:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:42.491 No valid GPT data, bailing
00:03:42.491 06:51:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.491 06:51:11 -- scripts/common.sh@391 -- # pt= 00:03:42.491 06:51:11 -- scripts/common.sh@392 -- # return 1 00:03:42.491 06:51:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:42.491 1+0 records in
00:03:42.491 1+0 records out
00:03:42.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00230774 s, 454 MB/s
00:03:42.491 06:51:11 -- spdk/autotest.sh@118 -- # sync 00:03:42.491 06:51:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.491 06:51:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.491 06:51:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:44.390 06:51:13 -- spdk/autotest.sh@124 -- # uname -s 00:03:44.390 06:51:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:44.390 06:51:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:44.390 06:51:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.390 06:51:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.390 06:51:13 -- common/autotest_common.sh@10 -- # set +x
00:03:44.390 ************************************
00:03:44.390 START TEST setup.sh
00:03:44.390 ************************************
00:03:44.390 06:51:13 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:44.390 * Looking for test storage...
00:03:44.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:44.390 06:51:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:44.390 06:51:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:44.390 06:51:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:44.390 06:51:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.390 06:51:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.390 06:51:13 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:44.390 ************************************
00:03:44.390 START TEST acl
00:03:44.390 ************************************
00:03:44.390 06:51:13 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:44.390 * Looking for test storage...
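The pre-cleanup above decides whether /dev/nvme0n1 is safe to scrub: spdk-gpt.py and blkid both find no partition table ("No valid GPT data, bailing", empty pt), so block_in_use returns 1 and autotest zeroes the first MiB of the disk. The same probe-then-wipe pattern, reduced to a standalone sketch (destructive and root-only; the device path is this runner's scratch disk):

```bash
#!/usr/bin/env bash
# Sketch of autotest's probe-then-wipe step traced above. Destroys data!
dev=/dev/nvme0n1
# blkid prints the partition-table type ("gpt", "dos", ...); empty = none.
pt=$(blkid -s PTTYPE -o value "$dev")
if [[ -z $pt ]]; then
    # No partition table -> the device is treated as unused ("return 1"
    # above): zero the first MiB so stale metadata cannot confuse tests.
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync
fi
```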
00:03:44.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:44.390 06:51:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:44.390 06:51:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:44.390 06:51:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:44.390 06:51:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:44.391 06:51:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.391 06:51:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:44.391 06:51:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:44.391 06:51:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.391 06:51:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.391 06:51:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:44.391 06:51:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:44.391 06:51:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:44.391 06:51:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:44.391 06:51:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:44.391 06:51:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.391 06:51:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.764 06:51:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:45.764 06:51:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:45.764 06:51:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.764 06:51:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:45.764 06:51:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.764 06:51:15 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:47.144 Hugepages
00:03:47.144 node hugesize free / total
00:03:47.144 Type BDF Vendor Device NUMA Driver Device Block devices
[log condensed: setup/acl.sh@18-20 then read the status output row by row, skipping the hugepage-size rows (1048576kB, 2048kB) as non-BDF entries and issuing continue for each of the sixteen ioatdma channels 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 because ioatdma != nvme]
00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:47.145 06:51:16 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:47.145 06:51:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.145 06:51:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.145 06:51:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:47.145 ************************************
00:03:47.145 START TEST denied
00:03:47.145 ************************************
00:03:47.145 06:51:16 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:47.145 06:51:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:47.145 06:51:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:47.145 06:51:16 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:47.145 06:51:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.145 06:51:16 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:48.520 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
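verify() answers "which driver owns this device?" straight from sysfs: under /sys/bus/pci/devices/<BDF>/ the driver entry is a symlink into /sys/bus/pci/drivers/, so a readlink plus a string compare is enough. A hedged standalone version of that check (BDF taken from this run; the echo reporting is illustrative):

```bash
#!/usr/bin/env bash
# Illustrative version of the sysfs check acl.sh's verify() performs above.
bdf=0000:88:00.0          # the NVMe controller on this runner
if [[ -e /sys/bus/pci/devices/$bdf ]]; then
    # .../driver resolves to e.g. /sys/bus/pci/drivers/nvme (or vfio-pci
    # after a rebind); basename yields just the driver name. If the device
    # is unbound, readlink fails and driver stays empty.
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    echo "$bdf is bound to $driver"
fi
```

The denied test above drives this check after running setup.sh config with PCI_BLOCKED set; the allowed test that follows is the mirror image with PCI_ALLOWED, expecting the controller to be rebound to vfio-pci.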
00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.520 06:51:17 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:51.050
00:03:51.050 real 0m3.741s
00:03:51.050 user 0m1.067s
00:03:51.050 sys 0m1.759s
00:03:51.050 06:51:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.050 06:51:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:51.050 ************************************
00:03:51.050 END TEST denied
00:03:51.050 ************************************
00:03:51.050 06:51:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:51.050 06:51:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:51.050 06:51:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.050 06:51:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.050 06:51:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:51.050 ************************************
00:03:51.050 START TEST allowed
00:03:51.050 ************************************
00:03:51.050 06:51:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:51.050 06:51:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:51.050 06:51:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:51.050 06:51:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:51.050 06:51:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.050 06:51:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:53.583 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:53.583 06:51:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:53.583 06:51:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:53.583 06:51:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:53.583 06:51:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.583 06:51:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:54.516
00:03:54.516 real 0m3.840s
00:03:54.516 user 0m0.993s
00:03:54.516 sys 0m1.694s
00:03:54.516 06:51:23 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.516 06:51:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:54.516 ************************************
00:03:54.516 END TEST allowed
00:03:54.516 ************************************
00:03:54.775 06:51:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:54.775
00:03:54.775 real 0m10.263s
00:03:54.775 user 0m3.125s
00:03:54.775 sys 0m5.143s
00:03:54.775 06:51:23 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.775 06:51:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:54.775 ************************************
00:03:54.775 END TEST acl
00:03:54.775 ************************************
00:03:54.775 06:51:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:54.775 06:51:24 setup.sh --
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:54.775 06:51:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.775 06:51:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.775 06:51:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:54.775 ************************************ 00:03:54.775 START TEST hugepages 00:03:54.775 ************************************ 00:03:54.775 06:51:24 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:54.775 * Looking for test storage... 00:03:54.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.775 06:51:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42282192 kB' 'MemAvailable: 45789004 kB' 'Buffers: 2704 kB' 'Cached: 11704416 kB' 'SwapCached: 0 kB' 'Active: 8701704 kB' 'Inactive: 3506552 kB' 'Active(anon): 8307352 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504384 kB' 'Mapped: 168592 kB' 'Shmem: 7806216 kB' 'KReclaimable: 199064 kB' 'Slab: 571028 kB' 'SReclaimable: 199064 kB' 'SUnreclaim: 371964 kB' 'KernelStack: 12912 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9428500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[log condensed: setup/common.sh@31-32 then stepped through each /proc/meminfo key in the order printed above, from MemTotal through HugePages_Surp, issuing continue for every key that was not Hugepagesize]
00:03:54.777 06:51:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.777 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.778 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.778 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
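The Hugepagesize lookup condensed above is a plain linear scan of /proc/meminfo key/value pairs: every non-matching key is skipped until the requested one is echoed. A self-contained equivalent of that helper (same ": "-splitting the trace shows; the function name is borrowed from setup/common.sh, but this standalone body is a sketch, not the exact script):

```bash
#!/usr/bin/env bash
# Standalone equivalent of the get_meminfo walk condensed above: split
# each /proc/meminfo line on ": " and print the value of the wanted key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1   # key not present
}
get_meminfo Hugepagesize   # prints 2048 (kB) on this runner
```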
00:03:54.778 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.778 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.778 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:54.778 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:54.778 06:51:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:54.778 06:51:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.778 06:51:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.778 06:51:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:54.778 ************************************
00:03:54.778 START TEST default_setup
00:03:54.778 ************************************
00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.778 06:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:56.152 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:56.153 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:56.153 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.153
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.153 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.153 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.153 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.153 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.153 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:57.095 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44389640 kB' 'MemAvailable: 47896468 kB' 'Buffers: 2704 kB' 'Cached: 11704508 kB' 'SwapCached: 0 kB' 'Active: 8720284 kB' 'Inactive: 3506552 kB' 'Active(anon): 8325932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522900 kB' 'Mapped: 168740 kB' 'Shmem: 7806308 kB' 'KReclaimable: 199096 kB' 'Slab: 570812 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371716 kB' 
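The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above are setup.sh rebinding the PCI functions to vfio-pci so SPDK/DPDK can drive them from user space. The trace shows only the result, not the mechanism; a common sysfs sequence for one device is sketched below (hedged: the vfio-pci module is assumed loaded, and setup.sh may differ in detail):

bdf=0000:88:00.0                                          # example device from the log
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach nvme/ioatdma
echo "$bdf" > /sys/bus/pci/drivers_probe                  # rebind honoring the override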
'KernelStack: 12752 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.095 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 
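Everything from the printf snapshot onward is a single get_meminfo call unrolled by xtrace: the function captures the meminfo file once with mapfile, strips any "Node N " prefix, then splits each "Key: value" line with IFS=': ' until the key matches, echoing the value. Condensed into a runnable form (same logic as the setup/common.sh lines being traced):

shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"              # one consistent snapshot
    mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix every key
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo AnonHugePages                  # -> 0, as echoed in the trace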
06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.096 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.097 06:51:26 
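With anon settled at 0, the same helper is now asked for HugePages_Surp. Note the guard earlier in the trace: AnonHugePages is only sampled because transparent hugepages are not hard-disabled ("always [madvise] never" does not match *[never]*). The gate, sketched using the get_meminfo from the previous block:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *'[never]'* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
else
    anon=0                              # THP off: nothing to account for
fi
echo "anon=$anon"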
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44390168 kB' 'MemAvailable: 47896996 kB' 'Buffers: 2704 kB' 'Cached: 11704508 kB' 'SwapCached: 0 kB' 'Active: 8719488 kB' 'Inactive: 3506552 kB' 'Active(anon): 8325136 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522112 kB' 'Mapped: 168704 kB' 'Shmem: 7806308 kB' 'KReclaimable: 199096 kB' 'Slab: 570780 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371684 kB' 'KernelStack: 12784 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- 
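This second snapshot carries the numbers verify_nr_hugepages actually cares about: HugePages_Total: 1024, HugePages_Free: 1024, and (being extracted here) HugePages_Surp: 0. The check it is building toward, sketched with this run's values (hedged: the exact assertions live in setup/hugepages.sh, only the inputs appear in this trace):

total=$(get_meminfo HugePages_Total)   # 1024
free=$(get_meminfo HugePages_Free)     # 1024
surp=$(get_meminfo HugePages_Surp)     # 0
resv=$(get_meminfo HugePages_Rsvd)     # 0
# The pool requested earlier (1024 x 2048 kB) should be fully backed by
# persistent pages, none borrowed as surplus:
(( total - surp == 1024 )) && echo "hugepage pool matches nr_hugepages"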
setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.097 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.098 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- 
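Worth noticing in each of these get_meminfo calls: node= is empty, so the existence test collapses to the nonexistent path /sys/devices/system/node/node/meminfo and the helper falls back to /proc/meminfo. Passing a node id flips it to the per-node view:

get_meminfo HugePages_Rsvd        # system-wide: reads /proc/meminfo (the case traced here)
get_meminfo HugePages_Rsvd 0      # per-node: reads /sys/devices/system/node/node0/meminfo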
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44389916 kB' 'MemAvailable: 47896744 kB' 'Buffers: 2704 kB' 'Cached: 11704528 kB' 'SwapCached: 0 kB' 'Active: 8719648 kB' 'Inactive: 3506552 kB' 'Active(anon): 8325296 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522204 kB' 'Mapped: 168608 kB' 'Shmem: 7806328 kB' 'KReclaimable: 199096 kB' 'Slab: 570812 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371716 kB' 'KernelStack: 12800 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.099 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
[xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo key (SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) and compares it against HugePages_Rsvd; every iteration takes the 'continue' branch]
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:57.101 nr_hugepages=1024
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.101 resv_hugepages=0
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:57.101 surplus_hugepages=0
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:57.101 anon_hugepages=0
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.101 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44389916 kB' 'MemAvailable: 47896744 kB' 'Buffers: 2704 kB' 'Cached: 11704548 kB' 'SwapCached: 0 kB' 'Active: 8719676 kB' 'Inactive: 3506552 kB' 'Active(anon): 8325324 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522200 kB' 'Mapped: 168608 kB' 'Shmem: 7806348 kB' 'KReclaimable: 199096 kB' 'Slab: 570812 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371716 kB' 'KernelStack: 12800 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: the same per-key scan repeats over every /proc/meminfo key (MemTotal through HugePages_Free), each failing the HugePages_Total match and hitting 'continue']
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.361 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26539732 kB' 'MemUsed: 6290152 kB' 'SwapCached: 0 kB' 'Active: 2936580 kB' 'Inactive: 110044 kB' 'Active(anon): 2825692 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2750880 kB' 'Mapped: 44496 kB' 'AnonPages: 298888 kB' 'Shmem: 2529948 kB' 'KernelStack: 8280 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89472 kB' 'Slab: 309892 kB' 'SReclaimable: 89472 kB' 'SUnreclaim: 220420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the per-key scan walks the node0 meminfo keys (MemTotal through HugePages_Free), each failing the HugePages_Surp match and hitting 'continue']
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:57.363 node0=1024 expecting 1024
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:57.363 
00:03:57.363 real	0m2.412s
00:03:57.363 user	0m0.654s
00:03:57.363 sys	0m0.874s
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:57.363 06:51:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:57.363 ************************************
00:03:57.363 END TEST default_setup
00:03:57.363 ************************************
00:03:57.363 06:51:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
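[editor's note] The default_setup trace above reduces to one small pattern: get_meminfo sets IFS=': ', reads the meminfo stream one "key value" pair at a time, and takes 'continue' on every key until the requested one matches, then echoes its value. A minimal bash sketch of that lookup follows; the function name and interface are illustrative, not the actual setup/common.sh helper.

    #!/usr/bin/env bash
    # Illustrative lookup mirroring the traced loop: split each line on
    # ': ' and skip (continue) until the requested key is found.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of 'continue's in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Total   # e.g. prints 1024 on this test node
    get_meminfo_value HugePages_Rsvd    # e.g. prints 0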
00:03:57.363 06:51:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:57.363 06:51:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:57.363 06:51:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:57.363 06:51:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.363 ************************************
00:03:57.363 START TEST per_node_1G_alloc
00:03:57.363 ************************************
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.363 06:51:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
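[editor's note] Here the test invokes scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1, i.e. 512 x 2048 kB pages on each node (1 GiB per node, matching the test name) rather than one pool-wide count. The standard kernel interface for that is the per-node sysfs knob; the sketch below shows only that mechanism and is illustrative, not the setup.sh internals.

    #!/usr/bin/env bash
    # Illustrative: request 512 x 2048 kB hugepages on nodes 0 and 1 via
    # the kernel's per-node sysfs knob, then read back what was granted.
    NRHUGE=512
    for node in 0 1; do
        sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
        echo "$NRHUGE" | sudo tee "$sysfs" >/dev/null
        echo "node${node}: requested $NRHUGE, granted $(cat "$sysfs")"
    done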
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.295 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44380092 kB' 'MemAvailable: 47886920 kB' 'Buffers: 2704 kB' 'Cached: 11704620 kB' 'SwapCached: 0 kB' 'Active: 8725476 kB' 'Inactive: 3506552 kB' 'Active(anon): 8331124 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527856 kB' 'Mapped: 169092 kB' 'Shmem: 7806420 kB' 'KReclaimable: 199096 kB' 'Slab: 571052 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371956 kB' 'KernelStack: 12816 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9455524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
00:03:58.559 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44380092 kB' 'MemAvailable: 47886920 kB' 'Buffers: 2704 kB' 'Cached: 11704620 kB' 'SwapCached: 0 kB' 'Active: 8725476 kB' 'Inactive: 3506552 kB' 'Active(anon): 8331124 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527856 kB' 'Mapped: 169092 kB' 'Shmem: 7806420 kB' 'KReclaimable: 199096 kB' 'Slab: 571052 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371956 kB' 'KernelStack: 12816 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9455524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[trace condensed, 00:03:58.559-00:03:58.561: setup/common.sh@31-32 replay the snapshot above field by field, hitting `continue` for every key from MemTotal through HardwareCorrupted until the requested AnonHugePages line is reached]
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
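Each get_meminfo call in this trace replays the same pattern: snapshot a meminfo file once with mapfile, strip the "Node <n> " prefix that per-node files carry, then scan key/value pairs with IFS=': ' until the requested key matches and its value is echoed. A self-contained sketch following the shape visible at setup/common.sh@16-33; this is a reconstruction from the trace, not the shipped common.sh:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix strip below

    # Return the value of one meminfo field, system-wide by default or
    # per-node when a node id is given (the path probed at common.sh@23).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _rest
        local mem_f=/proc/meminfo mem line
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _rest <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. AnonHugePages -> 0 in this run
                return 0
            fi
        done
        return 1    # assumption: key absent; every lookup in the trace matches
    }

    get_meminfo AnonHugePages     # -> 0
    get_meminfo HugePages_Total   # -> 1024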
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.561 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44379996 kB' 'MemAvailable: 47886824 kB' 'Buffers: 2704 kB' 'Cached: 11704620 kB' 'SwapCached: 0 kB' 'Active: 8725652 kB' 'Inactive: 3506552 kB' 'Active(anon): 8331300 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528068 kB' 'Mapped: 169472 kB' 'Shmem: 7806420 kB' 'KReclaimable: 199096 kB' 'Slab: 571036 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371940 kB' 'KernelStack: 12832 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9455540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196132 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[trace condensed, 00:03:58.561-00:03:58.562: setup/common.sh@31-32 again scan the snapshot, `continue`-ing past every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
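The verifier is sampling the kernel's hugepage counters one key at a time. Per the kernel's hugetlbpage documentation, HugePages_Total is the pool size, HugePages_Free the unallocated pages, HugePages_Rsvd pages committed but not yet faulted in, and HugePages_Surp surplus pages above the persistent pool size. A small illustrative helper (not part of the test suite) that reads all four in a single pass instead of one scan per key:

    #!/usr/bin/env bash
    # Collect the four hugepage counters from /proc/meminfo in one pass.
    declare -A hp
    while IFS=': ' read -r key val _; do
        case $key in
            HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp)
                hp[$key]=$val ;;
        esac
    done < /proc/meminfo
    # In this run: Total=1024 Free=1024 Rsvd=0 Surp=0
    printf '%s=%s\n' Total "${hp[HugePages_Total]}" Free "${hp[HugePages_Free]}" \
        Rsvd "${hp[HugePages_Rsvd]}" Surp "${hp[HugePages_Surp]}"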
setup/hugepages.sh@99 -- # surp=0 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.562 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44386132 kB' 'MemAvailable: 47892960 kB' 'Buffers: 2704 kB' 'Cached: 11704640 kB' 'SwapCached: 0 kB' 'Active: 8720756 kB' 'Inactive: 3506552 kB' 'Active(anon): 8326404 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523140 kB' 'Mapped: 169472 kB' 'Shmem: 7806440 kB' 'KReclaimable: 199096 kB' 'Slab: 571148 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 372052 kB' 'KernelStack: 12816 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9451064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 
06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.563 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.563 06:51:27 
[xtrace condensed: setup/common.sh@31-32 continues the per-field scan of /proc/meminfo, stepping past Writeback through HugePages_Free (IFS=': '; read -r var val _; continue on every non-matching key) until the target field HugePages_Rsvd is reached]
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:58.564 nr_hugepages=1024
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:58.564 resv_hugepages=0
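The scan condensed above is a plain field lookup over /proc/meminfo. A minimal standalone sketch of the same logic, with illustrative names rather than the exact SPDK helper:

    # Sketch of the lookup traced above, assuming /proc/meminfo's
    # "Key:   value [kB]" layout; get_meminfo_value is an illustrative
    # name, not the real setup/common.sh function.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # IFS of ':' plus space splits "HugePages_Rsvd:   0"
            # into the key and its numeric value
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    resv=$(get_meminfo_value HugePages_Rsvd)   # -> 0 in the run above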
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:58.564 surplus_hugepages=0
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:58.564 anon_hugepages=0
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:58.564 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.565 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44384104 kB' 'MemAvailable: 47890932 kB' 'Buffers: 2704 kB' 'Cached: 11704664 kB' 'SwapCached: 0 kB' 'Active: 8723876 kB' 'Inactive: 3506552 kB' 'Active(anon): 8329524 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526644 kB' 'Mapped: 169056 kB' 'Shmem: 7806464 kB' 'KReclaimable: 199096 kB' 'Slab: 571140 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 372044 kB' 'KernelStack: 12800 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9454388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
00:03:58.565 [xtrace condensed: setup/common.sh@31-32 scans MemTotal through Unaccepted and continues past every key that is not HugePages_Total]
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
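hugepages.sh@107-110 assert that HugePages_Total read from /proc/meminfo equals the requested page count plus surplus and reserved pages, and get_nodes then enumerates the sysfs node directories. A standalone sketch of that bookkeeping, assuming the 2-node, 1024-page layout seen in this run (names are illustrative):

    # Sketch of the hugepages.sh@107-112 checks traced above;
    # the constants are taken from this run.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

    # get_nodes-style enumeration: one sysfs directory per NUMA node.
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512    # expect an even 512-page split per node
    done
    echo "no_nodes=${#nodes_sys[@]}"     # -> 2 on this machine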
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.566 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27590220 kB' 'MemUsed: 5239664 kB' 'SwapCached: 0 kB' 'Active: 2937780 kB' 'Inactive: 110044 kB' 'Active(anon): 2826892 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2750940 kB' 'Mapped: 45272 kB' 'AnonPages: 299964 kB' 'Shmem: 2530008 kB' 'KernelStack: 8344 kB' 'PageTables: 5136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89472 kB' 'Slab: 310012 kB' 'SReclaimable: 89472 kB' 'SUnreclaim: 220540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:58.566 [xtrace condensed: setup/common.sh@31-32 scans node0's fields MemTotal through HugePages_Free and continues past every key that is not HugePages_Surp]
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.568 06:51:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16789424 kB' 'MemUsed: 10922400 kB' 'SwapCached: 0 kB' 'Active: 5782468 kB' 'Inactive: 3396508 kB' 'Active(anon): 5499004 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8956468 kB' 'Mapped: 124264 kB' 'AnonPages: 222600 kB' 'Shmem: 5276496 kB' 'KernelStack: 4472 kB' 'PageTables: 2816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109624 kB' 'Slab: 261128 kB' 'SReclaimable: 109624 kB' 'SUnreclaim: 151504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
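The per-node pass repeats the same scan against /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N" prefix that common.sh@29 strips before matching. A standalone sketch of the surplus accumulation in hugepages.sh@115-117, assuming this run's two 512-page nodes (names are illustrative):

    # Sketch of the per-node pass traced above; nodes_test starts from the
    # expected 512-page split and absorbs each node's reserved/surplus counts.
    declare -a nodes_test=([0]=512 [1]=512)
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Node meminfo lines look like "Node 0 HugePages_Surp: 0",
        # so the key and value sit in fields 3 and 4.
        surp=$(awk -v key="HugePages_Surp:" '$3 == key {print $4}' \
            "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += surp ))
        echo "node$node: expected=${nodes_test[node]} surplus=$surp"
    done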
00:03:58.568 [xtrace condensed: setup/common.sh@31-32 scans node1's fields MemTotal through FilePmdMapped, continuing past every key that is not HugePages_Surp; the excerpt ends during this scan as the timestamps roll over to 06:51:28]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:58.569 node0=512 expecting 512 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:58.569 node1=512 expecting 512 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:58.569 00:03:58.569 real 0m1.386s 00:03:58.569 user 0m0.579s 00:03:58.569 sys 0m0.767s 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.569 06:51:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.569 ************************************ 00:03:58.569 END TEST per_node_1G_alloc 00:03:58.569 ************************************ 00:03:58.827 06:51:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.827 06:51:28 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:58.827 06:51:28 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.827 06:51:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.827 06:51:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.827 ************************************ 00:03:58.827 START TEST even_2G_alloc 00:03:58.827 ************************************ 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.827 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.828 06:51:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.759 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.759 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:03:59.759 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.759 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:59.759 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:59.759 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.759 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.759 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.759 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:59.759 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.759 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.759 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:59.759 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:59.759 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.759 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.759 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.759 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44393068 kB' 'MemAvailable: 47899892 kB' 'Buffers: 2704 kB' 'Cached: 11704752 kB' 'SwapCached: 0 kB' 'Active: 8720684 kB' 'Inactive: 3506552 kB' 'Active(anon): 8326332 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522948 kB' 'Mapped: 168772 kB' 'Shmem: 7806552 kB' 'KReclaimable: 199088 kB' 'Slab: 571248 kB' 'SReclaimable: 199088 kB' 'SUnreclaim: 372160 kB' 'KernelStack: 12800 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 
06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.033 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
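At this point verify_nr_hugepages is resolving AnonHugePages. A few entries back (setup/hugepages.sh@96) it ran the gate "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]": that string is the contents of the kernel's transparent_hugepage policy file, where the bracketed word marks the active policy, so the test only passes when THP is not set to [never]. An interpretation of that gate as standalone shell, reusing the hypothetical get_meminfo_sketch from above (not the exact hugepages.sh code):

# Only fold THP-backed anonymous memory into the tally when THP is enabled.
thp_policy=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_policy != *"[never]"* ]]; then
    anon_kb=$(get_meminfo_sketch AnonHugePages)   # kB of THP-backed anon pages
else
    anon_kb=0
fi

In this run the policy file reads "always [madvise] never", so the gate passes and the AnonHugePages lookup traced here runs; it resolves to 0 kB just below, hence anon=0.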
00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.034 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44392828 kB' 'MemAvailable: 47899652 kB' 'Buffers: 2704 kB' 'Cached: 11704752 kB' 'SwapCached: 0 kB' 'Active: 8721100 kB' 'Inactive: 3506552 kB' 'Active(anon): 8326748 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523344 kB' 'Mapped: 168660 kB' 'Shmem: 7806552 kB' 'KReclaimable: 199088 kB' 'Slab: 571248 kB' 'SReclaimable: 199088 kB' 'SUnreclaim: 372160 kB' 'KernelStack: 12784 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.035 06:51:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.035 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.036 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44392548 kB' 'MemAvailable: 47899372 kB' 'Buffers: 2704 kB' 'Cached: 11704760 kB' 'SwapCached: 0 kB' 'Active: 8720116 kB' 'Inactive: 3506552 kB' 'Active(anon): 8325764 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522384 kB' 'Mapped: 168636 kB' 'Shmem: 7806560 kB' 'KReclaimable: 199088 kB' 'Slab: 571248 kB' 'SReclaimable: 199088 kB' 'SUnreclaim: 372160 kB' 'KernelStack: 12784 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: setup/common.sh@31-32 compare loop; every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and continues]
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:00.038 nr_hugepages=1024
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:00.038 resv_hugepages=0
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:00.038 surplus_hugepages=0
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:00.038 anon_hugepages=0
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
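The bookkeeping that follows is plain integer arithmetic over the values just read. A worked instance with this run's numbers (a sketch of the hugepages.sh@107/@109 invariants, not the verbatim source):

    # Values observed in the trace: 1024 pages requested, none surplus or reserved.
    nr_hugepages=1024
    surp=0
    resv=0
    (( 1024 == nr_hugepages + surp + resv ))   # holds: 1024 == 1024 + 0 + 0
    (( 1024 == nr_hugepages ))                 # holds, so the test proceeds

Both checks succeed, so the script re-reads HugePages_Total from /proc/meminfo to confirm the kernel agrees (it does: 1024, as the next call shows).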
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.038 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44392548 kB' 'MemAvailable: 47899372 kB' 'Buffers: 2704 kB' 'Cached: 11704796 kB' 'SwapCached: 0 kB' 'Active: 8720156 kB' 'Inactive: 3506552 kB' 'Active(anon): 8325804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522448 kB' 'Mapped: 168636 kB' 'Shmem: 7806596 kB' 'KReclaimable: 199088 kB' 'Slab: 571248 kB' 'SReclaimable: 199088 kB' 'SUnreclaim: 372160 kB' 'KernelStack: 12848 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9449728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: setup/common.sh@31-32 compare loop; every field from MemTotal through Unaccepted fails the HugePages_Total match and continues]
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.040 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27597684 kB' 'MemUsed: 5232200 kB' 'SwapCached: 0 kB' 'Active: 2937840 kB' 'Inactive: 110044 kB' 'Active(anon): 2826952 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2751008 kB' 'Mapped: 44524 kB' 'AnonPages: 300080 kB' 'Shmem: 2530076 kB' 'KernelStack: 8376 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89472 kB' 'Slab: 310008 kB' 'SReclaimable: 89472 kB' 'SUnreclaim: 220536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
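Unlike /proc/meminfo, the per-node files under /sys/devices/system/node/nodeN/meminfo prefix every line with "Node N ", which is why the trace shows mem=("${mem[@]#Node +([0-9]) }") before parsing: an extglob prefix strip that makes per-node lines look like global ones. A minimal sketch of that normalization (assumes extglob is enabled, as the traced +([0-9]) pattern implies):

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # Strip the leading "Node <n> " so one parser handles both file formats:
    mem=("${mem[@]#Node +([0-9]) }")
    # e.g. "Node 0 HugePages_Total:   512" -> "HugePages_Total:   512"

The node0 snapshot above reports HugePages_Total: 512 and HugePages_Free: 512; with two NUMA nodes and a 2048 kB page size, 2 x 512 x 2048 kB = 2 GiB, which is exactly the even split this even_2G_alloc test is verifying.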
[xtrace condensed: setup/common.sh@31-32 compare loop; every node0 field from MemTotal through HugePages_Free fails the HugePages_Surp match and continues]
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.041 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16798352 kB' 'MemUsed: 10913472 kB' 'SwapCached: 0 kB' 'Active: 5782320 kB' 'Inactive: 3396508 kB' 'Active(anon): 5498856 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8956516 kB' 'Mapped: 124112 kB' 'AnonPages: 222376 kB' 'Shmem: 5276544 kB' 'KernelStack: 4472 kB' 'PageTables: 2736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB'
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109616 kB' 'Slab: 261240 kB' 'SReclaimable: 109616 kB' 'SUnreclaim: 151624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.042 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.043 node0=512 expecting 512 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:00.043 node1=512 expecting 512 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:00.043 00:04:00.043 real 0m1.419s 00:04:00.043 user 0m0.576s 00:04:00.043 sys 0m0.801s 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.043 06:51:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.043 ************************************ 00:04:00.043 END TEST even_2G_alloc 00:04:00.043 ************************************ 00:04:00.317 06:51:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.317 06:51:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:00.317 06:51:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.317 06:51:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.317 06:51:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.317 ************************************ 00:04:00.317 START TEST odd_alloc 
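The elided xtrace above is setup/common.sh's get_meminfo helper doing a field-by-field scan of a meminfo file: it picks /proc/meminfo or the per-node file, strips the "Node <id> " prefix that per-node files carry (the trace shows this done with mapfile plus the pattern strip mem=("${mem[@]#Node +([0-9]) }")), then splits each line on ': ' until the requested field matches. A minimal standalone sketch of that pattern, reconstructed from the trace with a plain read loop; the body is illustrative, not the verbatim SPDK source:

    # Sketch of the get_meminfo pattern traced above: choose the meminfo
    # source, strip the per-node "Node <id> " prefix, and print the value
    # of the requested field.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            # per-node files prefix every record with "Node <id> "
            [[ $line == Node\ * ]] && line=${line#Node * }
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }
    # get_meminfo HugePages_Surp 1   -> surplus huge pages on NUMA node 1

The trace's inner loop is the same idea unrolled by xtrace: one IFS=': ' / read / [[ ... ]] / continue record per meminfo field until the match hits echo "$val"; return 0.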
00:04:00.317 ************************************
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.317 06:51:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
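Two details of the odd_alloc setup just traced are worth pinning down. First the size: HUGEMEM=2049 MB is 2049 * 1024 = 2098176 kB, which at the 2048 kB hugepage size is 1024.5 pages, and the trace shows that rounded up to nr_hugepages=1025. Second the split: with _no_nodes=2 the trace assigns nodes_test[1]=512 and then nodes_test[0]=513, i.e. each node gets the integer share 1025 / 2 = 512 and the odd leftover page lands on node 0. A hedged sketch of that split (the helper name and loop shape are illustrative, not the verbatim get_test_nr_hugepages_per_node):

    # Sketch of the per-node split traced above: integer share per node,
    # remainder parked on the lowest-numbered node(s).
    split_hugepages_per_node() {
        local total=$1 no_nodes=$2 n
        local share=$((total / no_nodes))   # 1025 / 2 = 512
        local rem=$((total % no_nodes))     # 1025 % 2 = 1
        for ((n = 0; n < no_nodes; n++)); do
            # the first $rem nodes absorb one leftover page each
            echo "node$n=$((share + (n < rem ? 1 : 0)))"
        done
    }
    # split_hugepages_per_node 1025 2   -> node0=513, node1=512

With that split in place, setup.sh is invoked and its output follows.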
00:04:01.254 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:01.254 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:01.254 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:01.254 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:01.254 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:01.254 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:01.254 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:01.254 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:01.254 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:01.254 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:01.254 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:01.254 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:01.254 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:01.254 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:01.254 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:01.254 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:01.254 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.520 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44398788 kB' 'MemAvailable: 47905620 kB' 'Buffers: 2704 kB' 'Cached: 11704892 kB' 'SwapCached: 0 kB' 'Active: 8718816 kB' 'Inactive: 3506552 kB' 'Active(anon): 8324464 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521100 kB' 'Mapped: 167760 kB' 'Shmem: 7806692 kB' 'KReclaimable: 199104 kB' 'Slab: 570976 kB' 'SReclaimable: 199104 kB' 'SUnreclaim: 371872 kB' 'KernelStack: 13216 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9438384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196416 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[... xtrace elided: setup/common.sh@31-32 loop repeats for every field from MemTotal through HardwareCorrupted; all are skipped via continue until AnonHugePages matches ...]
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
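The meminfo snapshot confirms the odd allocation took effect (HugePages_Total: 1025, HugePages_Free: 1025), and the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 is matching the contents of the kernel's transparent-hugepage mode file, where the bracketed word is the active mode; the AnonHugePages read only matters when that mode is not [never]. A small sketch of the same gate, assuming the standard sysfs path and the get_meminfo helper sketched earlier (variable names are illustrative):

    # Sketch of the THP gate traced at hugepages.sh@96: read the active
    # transparent-hugepage mode and only account anonymous hugepages when
    # the mode is not [never].
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the trace above
        echo "anon=${anon:-0}"
    fi

Here the file reads "always [madvise] never", so madvise is active, the gate passes, and the trace proceeds to anon=0.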
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.522 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44401096 kB' 'MemAvailable: 47907924 kB' 'Buffers: 2704 kB' 'Cached: 11704892 kB' 'SwapCached: 0 kB' 'Active: 8719576 kB' 'Inactive: 3506552 kB' 'Active(anon): 8325224 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521748 kB' 'Mapped: 167820 kB' 'Shmem: 7806692 kB' 'KReclaimable: 199096 kB' 'Slab: 570984 kB' 'SReclaimable: 199096 kB' 'SUnreclaim: 371888 kB' 'KernelStack: 13376 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9438400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196368 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[... xtrace elided: setup/common.sh@31-32 loop repeats for the fields MemTotal through Bounce, each skipped via continue; the HugePages_Surp scan continues below ...]
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.523 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.524 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44402000 kB' 'MemAvailable: 47908820 kB' 'Buffers: 2704 kB' 'Cached: 11704912 kB' 'SwapCached: 0 kB' 'Active: 8718560 kB' 'Inactive: 3506552 kB' 'Active(anon): 8324208 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520712 kB' 'Mapped: 167784 kB' 'Shmem: 7806712 kB' 'KReclaimable: 199080 kB' 'Slab: 570936 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371856 kB' 'KernelStack: 13280 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9438420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:04:01.524 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.524 06:51:30 setup.sh.hugepages.odd_alloc -- 
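The loop traced above is setup/common.sh's get_meminfo helper: it snapshots a meminfo file into an array and scans it field by field, emitting one xtrace "continue" per non-matching field. A minimal sketch of that helper, reconstructed only from what the trace itself shows (the real setup/common.sh may differ in detail):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem line
      # A node argument switches to that node's own meminfo file.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # Every "continue" in the trace is one non-matching field skipped here.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

With this sketch, get_meminfo HugePages_Surp reproduces the scan above: every field before HugePages_Surp is skipped, then its value (0) is echoed and the helper returns 0.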
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:01.525 nr_hugepages=1025
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.525 resv_hugepages=0
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.525 surplus_hugepages=0
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.525 anon_hugepages=0
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
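The echoed values and arithmetic guards above are the core of the odd_alloc verification: the kernel must report exactly the odd page count that was requested, with zero surplus and zero reserved pages. A hedged sketch of that accounting step, reusing the get_meminfo sketch above (nr_hugepages=1025 is the odd count this run requested; the exact guard expressions in setup/hugepages.sh may differ):

  nr_hugepages=1025                     # odd page count requested by the test
  surp=$(get_meminfo HugePages_Surp)    # 0 in the trace above
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in the trace above
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  total=$(get_meminfo HugePages_Total)  # 1025, traced next
  # The guard from the trace: the global count must equal the requested
  # odd count plus surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) || exit 1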
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.525 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.526 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44403980 kB' 'MemAvailable: 47910800 kB' 'Buffers: 2704 kB' 'Cached: 11704928 kB' 'SwapCached: 0 kB' 'Active: 8718524 kB' 'Inactive: 3506552 kB' 'Active(anon): 8324172 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520636 kB' 'Mapped: 167776 kB' 'Shmem: 7806728 kB' 'KReclaimable: 199080 kB' 'Slab: 570936 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371856 kB' 'KernelStack: 13088 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9436076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
00:04:01.526 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace elided: per-field scan of the snapshot above for HugePages_Total, each non-matching field skipped with "continue"]
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27603352 kB' 'MemUsed: 5226532 kB' 'SwapCached: 0 kB' 'Active: 2935916 kB' 'Inactive: 110044 kB' 'Active(anon): 2825028 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2751076 kB' 'Mapped: 43812 kB' 'AnonPages: 298016 kB' 'Shmem: 2530144 kB' 'KernelStack: 8296 kB' 'PageTables: 4828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89504 kB' 'Slab: 309872 kB' 'SReclaimable: 89504 kB' 'SUnreclaim: 220368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
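get_nodes above records how the 1025 pages landed across the two NUMA nodes (512 on node0, 513 on node1), and the test then re-checks each node through its own meminfo file, starting with node0 just traced. A sketch of that per-node pass, under the same assumptions as the sketches above (the trace shows nodes_sys being assigned the already-expanded values 512 and 513; obtaining them via per-node get_meminfo calls, and omitting the trace's nodes_test expected-count bookkeeping, are assumptions here):

  shopt -s extglob
  declare -A nodes_sys
  # Record each node's current hugepage count (512 + 513 = 1025 in this run).
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  no_nodes=${#nodes_sys[@]}   # 2 on this machine
  (( no_nodes > 0 )) || exit 1
  # Re-read each node's surplus from /sys/devices/system/node/nodeN/meminfo,
  # as the trace does for node0 above.
  for node in "${!nodes_sys[@]}"; do
      echo "node$node: total=${nodes_sys[$node]} surp=$(get_meminfo HugePages_Surp "$node")"
  done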
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.528 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16801764 kB' 'MemUsed: 10910060 kB' 'SwapCached: 0 kB' 'Active: 5781616 kB' 'Inactive: 3396508 kB' 'Active(anon): 5498152 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8956600 kB' 'Mapped: 123972 kB' 'AnonPages: 221592 kB' 'Shmem: 5276628 kB' 'KernelStack: 4520 kB' 'PageTables: 2788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109576 kB' 'Slab: 261064 kB' 'SReclaimable: 109576 kB' 'SUnreclaim: 151488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
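The block above is bash xtrace of the get_meminfo helper in setup/common.sh scanning a per-node meminfo snapshot one key at a time; the escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p strings are simply how xtrace renders the quoted "$get" pattern in each [[ ... ]] comparison. A minimal sketch of that helper, reconstructed from the traced commands (the shopt line and the return-on-miss value are assumptions, the rest mirrors the trace):

    # Reconstruction of get_meminfo from the xtrace above (hypothetical
    # sketch, not the verbatim SPDK source). Echoes one meminfo value,
    # optionally scoped to a NUMA node.
    shopt -s extglob                       # assumed: needed for +([0-9]) below
    get_meminfo() {
        local get=$1 node=$2
        local var val line mem_f=/proc/meminfo
        local -a mem
        # Prefer the per-node sysfs view when a node id was given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1                           # assumed behavior when the key is absent
    }

With the node-0 snapshot printed earlier, get_meminfo HugePages_Surp 0 echoes 0, which is exactly the echo 0 / return 0 pair recorded just above before the same key walk repeats for node 1.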
00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.789 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:01.790 node0=512 expecting 513 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:01.790 node1=513 expecting 512 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:01.790 00:04:01.790 real 0m1.468s 00:04:01.790 user 0m0.612s 00:04:01.790 sys 0m0.811s 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.790 06:51:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.790 ************************************ 00:04:01.790 END TEST odd_alloc 00:04:01.790 ************************************ 00:04:01.790 06:51:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.790 06:51:31 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:01.790 06:51:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.790 06:51:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.790 06:51:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.790 ************************************ 00:04:01.790 START TEST custom_alloc 00:04:01.790 ************************************ 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:01.790 06:51:31 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:01.790 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:01.791 06:51:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:01.791 06:51:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.791 06:51:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.173 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.173 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.173 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.173 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:03.173 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.173 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.173 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.173 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.173 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.173 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.173 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.173 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
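The custom_alloc trace above computes the per-node plan that ends in HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024': at the 2048 kB hugepage size reported later, 1048576 kB yields 512 pages pinned to node 0 and 2097152 kB yields 1024 pages for node 1, 1536 pages in total. A condensed sketch of that arithmetic (variable names follow the traced hugepages.sh; the standalone framing is hypothetical):

    # Per-node hugepage plan as traced in setup/hugepages.sh (sketch).
    default_hugepages=2048                          # kB, the Hugepagesize shown below
    declare -a nodes_hp HUGENODE
    nodes_hp[0]=$(( 1048576 / default_hugepages ))  # 1 GiB on node 0 -> 512 pages
    nodes_hp[1]=$(( 2097152 / default_hugepages ))  # 2 GiB on node 1 -> 1024 pages
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    (IFS=,; echo "HUGENODE=${HUGENODE[*]}")         # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$_nr_hugepages"              # 1536, matching the verify step

setup.sh then consumes that HUGENODE split, and verify_nr_hugepages reads it back out of the meminfo dumps that follow.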
00:04:03.173 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.173 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.173 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.173 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.173 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43332624 kB' 'MemAvailable: 46839444 kB' 'Buffers: 2704 kB' 'Cached: 11705016 kB' 'SwapCached: 0 kB' 'Active: 8717564 kB' 'Inactive: 3506552 kB' 'Active(anon): 8323212 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519536 kB' 'Mapped: 167888 kB' 'Shmem: 7806816 kB' 'KReclaimable: 199080 kB' 'Slab: 570588 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371508 kB' 'KernelStack: 12880 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9436140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
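Just before this meminfo dump, hugepages.sh@96 tested the transparent-hugepage mode string ('always [madvise] never') against the pattern *[never]*: anonymous THP is only counted when THP is not globally disabled. A sketch of that probe (the file path is inferred from the mode string, and the surrounding verify_nr_hugepages framing is assumed):

    # THP-aware anon accounting as traced at setup/hugepages.sh@96-97 (sketch).
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)  # assumed source of
                                                              # "always [madvise] never"
    anon=0
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # helper sketched earlier
    fi

Here the bracketed mode is [madvise], so the test passes and the key walk above is just get_meminfo scanning /proc/meminfo until it matches AnonHugePages: 0 kB a few iterations further on, which is where the anon=0 assignment in the trace comes from.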
00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': '
00:04:03.173 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed, 00:04:03.173-00:04:03.174: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue for every /proc/meminfo key from Zswapped through HardwareCorrupted; none match]
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
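The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time until the requested key matches, then echoing the value (0 kB of AnonHugePages on this node). A minimal sketch of that loop, reconstructed purely from the xtrace lines shown here; the actual helper in SPDK's setup/common.sh may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob # the +([0-9]) pattern below needs extended globs

    # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
    # the per-NUMA-node file when a node number is given (a reconstruction
    # from the trace, not the verbatim SPDK source).
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # no match: try the next key
            echo "$val"                      # match: print the numeric value
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages # prints 0 on the machine traced above

With "AnonHugePages: 0 kB" as input, IFS=': ' splits the line into var=AnonHugePages, val=0, with the trailing "kB" consumed by _, which is exactly the echo 0 / return 0 pair visible in the trace.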
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.174 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43333216 kB' 'MemAvailable: 46840036 kB' 'Buffers: 2704 kB' 'Cached: 11705016 kB' 'SwapCached: 0 kB' 'Active: 8718120 kB' 'Inactive: 3506552 kB' 'Active(anon): 8323768 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520080 kB' 'Mapped: 168288 kB' 'Shmem: 7806816 kB' 'KReclaimable: 199080 kB' 'Slab: 570596 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371516 kB' 'KernelStack: 12880 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9437380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed, 00:04:03.174-00:04:03.176: setup/common.sh@31-32 key-by-key scan against \H\u\g\e\P\a\g\e\s\_\S\u\r\p from MemTotal through HugePages_Rsvd; none match]
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.176 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43330612 kB' 'MemAvailable: 46837432 kB' 'Buffers: 2704 kB' 'Cached: 11705036 kB' 'SwapCached: 0 kB' 'Active: 8721608 kB' 'Inactive: 3506552 kB' 'Active(anon): 8327256 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523600 kB' 'Mapped: 168288 kB' 'Shmem: 7806836 kB' 'KReclaimable: 199080 kB' 'Slab: 570644 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371564 kB' 'KernelStack: 12896 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9440704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
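The mem=("${mem[@]#Node +([0-9]) }") step that precedes each snapshot exists because per-node meminfo files prefix every line with "Node <n> "; against plain /proc/meminfo, as in this run, it is a no-op. A small illustration with hypothetical input lines (requires extglob):

    shopt -s extglob
    node_line='Node 0 HugePages_Total: 768'  # per-node format (example value)
    flat_line='HugePages_Total: 1536'        # /proc/meminfo format, as in this run
    echo "${node_line#Node +([0-9]) }"       # -> HugePages_Total: 768
    echo "${flat_line#Node +([0-9]) }"       # -> HugePages_Total: 1536 (unchanged)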
[xtrace condensed, 00:04:03.176-00:04:03.178: setup/common.sh@31-32 key-by-key scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d from MemTotal through HugePages_Free; none match]
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.178 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43327188 kB' 'MemAvailable: 46834008 kB' 'Buffers: 2704 kB' 'Cached: 11705036 kB' 'SwapCached: 0 kB' 'Active: 8723424 kB' 'Inactive: 3506552 kB' 'Active(anon): 8329072 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525432 kB' 'Mapped: 168768 kB' 'Shmem: 7806836 kB' 'KReclaimable: 199080 kB' 'Slab: 570644 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371564 kB' 'KernelStack: 12896 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9442320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
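At this point hugepages.sh folds the three probed counters into its bookkeeping: with anon=0, surp=0 and resv=0, the requested pool of 1536 pages must equal the kernel's HugePages_Total exactly. A sketch of that arithmetic as the trace suggests it, in standalone form (nr_hugepages is set earlier in the test run, outside this excerpt, so its assignment here is an assumption):

    nr_hugepages=1536                  # requested earlier in the test (assumed here)
    anon=$(get_meminfo AnonHugePages)  # 0 in this run
    surp=$(get_meminfo HugePages_Surp) # 0
    resv=$(get_meminfo HugePages_Rsvd) # 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # the pool is consistent only if the requested count covers surplus + reserved
    (( 1536 == nr_hugepages + surp + resv ))
    (( 1536 == nr_hugepages ))
    # ...and must match the kernel's own total, fetched next in the trace
    (( 1536 == $(get_meminfo HugePages_Total) ))

The same counters can be checked by hand with: grep -E '^(HugePages_|AnonHugePages|Hugepagesize)' /proc/meminfo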
[xtrace condensed, 00:04:03.178-00:04:03.179: setup/common.sh@31-32 key-by-key scan against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; MemTotal through SReclaimable compared so far, none matching]
00:04:03.179 06:51:32
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.179 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.180 06:51:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
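An aside on the loop condensed above: get_meminfo is plain "key: value" scanning. A minimal standalone sketch of the same pattern, assuming nothing beyond stock bash and the standard /proc and sysfs meminfo layout (get_meminfo_sketch and its variable names are illustrative, not the suite's actual common.sh):

shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo var val _ line
    local -a mem
    # Per-node meminfo lives in sysfs and prefixes every line with "Node N ".
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Total val=1536
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# On the box traced above: get_meminfo_sketch HugePages_Total -> 1536,
# and get_meminfo_sketch HugePages_Surp 0 -> 0.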
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.180 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27580148 kB' 'MemUsed: 5249736 kB' 'SwapCached: 0 kB' 'Active: 2937336 kB' 'Inactive: 110044 kB' 'Active(anon): 2826448 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2751080 kB' 'Mapped: 44644 kB' 'AnonPages: 299408 kB' 'Shmem: 2530148 kB' 'KernelStack: 8360 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89504 kB' 'Slab: 309728 kB' 'SReclaimable: 89504 kB' 'SUnreclaim: 220224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:03.180-00:04:03.181 [xtrace condensed: setup/common.sh@31-32 scans each node0 meminfo field, MemTotal through HugePages_Free, hitting "continue" on every key that is not HugePages_Surp]
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
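A hedged cross-check on the 512/1024 split the trace verifies node by node: the kernel also exposes per-node hugepage counters directly in sysfs (standard kernel paths for 2 MB pages, independent of this test suite):

for n in /sys/devices/system/node/node[0-9]*; do
    # nr_hugepages here is the per-node pool size for 2048 kB pages
    printf '%s: %s x 2MB hugepages\n' "${n##*/}" \
        "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
done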
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.181 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15746536 kB' 'MemUsed: 11965288 kB' 'SwapCached: 0 kB' 'Active: 5781832 kB' 'Inactive: 3396508 kB' 'Active(anon): 5498368 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8956724 kB' 'Mapped: 123972 kB' 'AnonPages: 221748 kB' 'Shmem: 5276752 kB' 'KernelStack: 4552 kB' 'PageTables: 2836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109576 kB' 'Slab: 260916 kB' 'SReclaimable: 109576 kB' 'SUnreclaim: 151340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:03.181-00:04:03.183 [xtrace condensed: setup/common.sh@31-32 scans each node1 meminfo field, MemTotal through HugePages_Free, hitting "continue" on every key that is not HugePages_Surp]
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
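The sorted_t/sorted_s assignments traced just below lean on a bash idiom worth spelling out: writing each observed count into an array index turns the counts into a sorted, de-duplicated key set, so two distributions compare as joined strings. A self-contained sketch with the values from this run (names mirror the trace; this is an illustration, not the suite's code):

declare -a nodes_test=(512 1024) nodes_sys=(512 1024)
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # index = count, so keys come back sorted
    sorted_s[nodes_sys[node]]=1
done
t=$(IFS=,; echo "${!sorted_t[*]}")  # "512,1024"
s=$(IFS=,; echo "${!sorted_s[*]}")
[[ $t == "$s" ]] && echo "per-node hugepage split matches: $t"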
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:03.183 node0=512 expecting 512
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:03.183 node1=1024 expecting 1024
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:03.183
00:04:03.183 real 0m1.513s
00:04:03.183 user 0m0.637s
00:04:03.183 sys 0m0.839s
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:03.183 06:51:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:03.183 ************************************
00:04:03.183 END TEST custom_alloc
00:04:03.183 ************************************
00:04:03.183 06:51:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:03.183 06:51:32 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:03.183 06:51:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:03.183 06:51:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:03.183 06:51:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.183 ************************************
00:04:03.183 START TEST no_shrink_alloc
00:04:03.183 ************************************
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
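The get_test_nr_hugepages call traced just below asks for 2097152 kB on node 0; with the 2048 kB Hugepagesize reported in the meminfo dumps above, that is the nr_hugepages=1024 the trace derives. A sketch of the arithmetic (variable names are illustrative):

size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this box
echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                # -> 1024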
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.183 06:51:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:04.560 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:04.560 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:04.560 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:04.560 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:04.560 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:04.560 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:04.560 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:04.560 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:04.560 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:04.560 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:04.560 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:04.560 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:04.560 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:04.560 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:04.560 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:04.560 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:04.560 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
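The @96 check just traced gates AnonHugePages accounting on transparent hugepages not being disabled; the kernel marks the active THP mode with brackets in a standard sysfs file (e.g. "always [madvise] never"). A standalone sketch of the same test:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    echo "THP mode: $thp -> AnonHugePages is meaningful"
fi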
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.560 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44367828 kB' 'MemAvailable: 47874648 kB' 'Buffers: 2704 kB' 'Cached: 11705148 kB' 'SwapCached: 0 kB' 'Active: 8718384 kB' 'Inactive: 3506552 kB' 'Active(anon): 8324032 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520368 kB' 'Mapped: 167924 kB' 'Shmem: 7806948 kB' 'KReclaimable: 199080 kB' 'Slab: 570528 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371448 kB' 'KernelStack: 12800 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9436604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare-and-continue cycle repeats, in /proc/meminfo order, for every key before AnonHugePages ...]
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
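The block above is bash xtrace from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo into an array (or a per-node /sys/devices/system/node/node$node/meminfo file when a node is passed), strips any leading "Node N " prefix, then reads key/value pairs until the requested field matches and echoes its value. AnonHugePages is 0 kB on this runner, hence anon=0. A minimal standalone sketch of that scan, reconstructed from the trace (the function name and error handling are illustrative, not the script's own code):

#!/usr/bin/env bash
shopt -s extglob  # the "Node +([0-9]) " strip below needs extended globs

# get_meminfo_sketch <field> [node] -- print the value of one meminfo field.
# Illustrative reconstruction of the helper traced above, not the SPDK code itself.
get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo var val _ line
  # A node argument switches to that NUMA node's view of the same counters.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem <"$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
  for line in "${mem[@]}"; do
    # "HugePages_Total:     1024" -> var=HugePages_Total, val=1024 (a unit like kB lands in _)
    IFS=': ' read -r var val _ <<<"$line"
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done
  return 1
}

get_meminfo_sketch HugePages_Total  # -> 1024 on this runner, matching the dump above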
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44368012 kB' 'MemAvailable: 47874832 kB' 'Buffers: 2704 kB' 'Cached: 11705148 kB' 'SwapCached: 0 kB' 'Active: 8718480 kB' 'Inactive: 3506552 kB' 'Active(anon): 8324128 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519992 kB' 'Mapped: 167888 kB' 'Shmem: 7806948 kB' 'KReclaimable: 199080 kB' 'Slab: 570528 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371448 kB' 'KernelStack: 12864 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9436620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.561 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare-and-continue cycle repeats, in /proc/meminfo order, for every key before HugePages_Surp ...]
00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
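One detail of the preamble that looks like a bug but is not: the repeated test [[ -e /sys/devices/system/node/node/meminfo ]] is the node-selection check with an empty $node, so /sys/devices/system/node/node$node/meminfo collapses to .../node/node/meminfo; that path never exists and the helper falls back to /proc/meminfo. With a real node id the same sketch would read one node's counters, roughly as follows (node 0 and the sample value are assumptions for illustration):

# Per-node usage of the sketch above: read node 0's slice of the hugepage pool.
get_meminfo_sketch HugePages_Total 0  # reads /sys/devices/system/node/node0/meminfo
# Per-node meminfo lines carry a "Node 0 " prefix, e.g. "Node 0 HugePages_Total:  512";
# the "Node +([0-9]) " strip in the sketch removes it before the key comparison.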
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 
06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.562 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [trace condensed: the remaining /proc/meminfo keys, SReclaimable through HugePages_Free, are each tested against HugePages_Rsvd and skipped with continue]
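A note on the escaped tokens in this scan: the comparison is presumably written as [[ $var == "$get" ]], and when bash traces a [[ ]] expression it backslash-escapes every character of the expanded right-hand side to show that the value is matched literally rather than as a glob pattern. A standalone illustration (not part of the harness):

  var=HugePages_Rsvd get=HugePages_Rsvd
  set -x
  [[ $var == "$get" ]] && echo match
  # xtrace prints: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  set +x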
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.563 nr_hugepages=1024 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.563 resv_hugepages=0 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.563 surplus_hugepages=0 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.563 anon_hugepages=0 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44367672 kB' 'MemAvailable: 47874492 kB' 'Buffers: 2704 kB' 'Cached: 11705192 kB' 'SwapCached: 0 kB' 'Active: 8718012 kB' 'Inactive: 3506552 kB' 'Active(anon): 8323660 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519852 kB' 'Mapped: 167812 kB' 'Shmem: 7806992 kB' 'KReclaimable: 199080 kB' 'Slab: 570520 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371440 kB' 'KernelStack: 12864 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9436664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:04:04.563 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [trace condensed: each /proc/meminfo key from MemTotal through Unaccepted is tested against HugePages_Total and skipped with continue] 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
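Every one of these lookups goes through the get_meminfo helper in setup/common.sh. Reconstructed from the xtrace output above, its parsing loop looks roughly like the sketch below; this is an approximation for readability, not the verbatim source (the extglob prefix strip and the per-node sysfs fallback are taken straight from the trace):

  get_meminfo() {                      # usage: get_meminfo <key> [<numa node>]
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem line
      shopt -s extglob
      # A node argument switches to that node's sysfs meminfo copy.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the key-by-key scan seen in the trace
          echo "$val"
          return 0
      done
      return 1
  }

In this run get_meminfo HugePages_Total answered 1024; the node-scoped call get_meminfo HugePages_Surp 0 below reads /sys/devices/system/node/node0/meminfo the same way.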
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26532220 kB' 'MemUsed: 6297664 kB' 'SwapCached: 0 kB' 'Active: 2935744 kB' 'Inactive: 110044 kB' 'Active(anon): 2824856 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2751096 kB' 'Mapped: 43840 kB' 'AnonPages: 297788 kB' 'Shmem: 2530164 kB' 'KernelStack: 8280 kB' 'PageTables: 4736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89504 kB' 'Slab: 309528 kB' 'SReclaimable: 89504 kB' 'SUnreclaim: 220024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.564 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.564 06:51:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [trace condensed: each node0 meminfo key from MemFree through HugePages_Total is tested against HugePages_Surp and skipped with continue]
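The surrounding loop is setup/hugepages.sh bookkeeping: get_nodes snapshots each NUMA node's hugepage count, then every node's expected figure is adjusted by the reserved and surplus pages before the node0=1024 expecting 1024 comparison below. A condensed sketch using the variable names from the trace (simplified, and assuming the get_meminfo sketch above; not the verbatim script):

  shopt -s extglob
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
      done
      no_nodes=${#nodes_sys[@]}          # 2 on this box: node0=1024, node1=0
      (( no_nodes > 0 ))
  }

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                 # resv=0 in this run
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done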
00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.565 node0=1024 expecting 1024 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.565 06:51:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.940 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:05.940 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.940 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:05.940 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:05.940 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:05.940 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:05.940 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:05.940 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:05.940 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:05.940 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:05.940 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:05.940 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:05.940 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:05.940 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:05.940 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:05.940 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:05.940 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:05.940 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.940 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.941 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44346712 kB' 'MemAvailable: 47853532 kB' 'Buffers: 2704 kB' 'Cached: 11705260 kB' 'SwapCached: 0 kB' 'Active: 8718116 kB' 'Inactive: 3506552 kB' 'Active(anon): 8323764 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519972 kB' 'Mapped: 167896 kB' 'Shmem: 7807060 kB' 'KReclaimable: 199080 kB' 'Slab: 570712 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371632 kB' 'KernelStack: 12880 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9437004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB' 00:04:05.941 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.941 06:51:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.941 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [trace condensed: keys MemFree through SReclaimable are each tested against AnonHugePages and skipped with continue]
00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
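What the trimmed iterations above are doing: get_meminfo (setup/common.sh) slurps the meminfo file into an array, strips any per-node "Node N " prefix, then reads it line by line until the requested key matches. A minimal reconstruction from the xtrace follows; the function wrapper, the per-node branch, and the final fallback return are inferred rather than verbatim:

    # Sketch of setup/common.sh:get_meminfo, reconstructed from the trace.
    shopt -s extglob                      # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1                      # key to look up, e.g. AnonHugePages
        local node=$2                     # empty in this run
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node stats live in sysfs and prefix each line with "Node N ".
        # With node= empty, /sys/devices/system/node/node/meminfo (the exact
        # path tested in the trace) does not exist, so the system-wide file
        # is used.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of continue above
            echo "$val"                        # e.g. "0" for AnonHugePages
            return 0
        done
        return 1                               # key absent (assumed fallback)
    }

On this box, get_meminfo AnonHugePages prints 0, which is exactly what hugepages.sh@97 stores as anon=0 above.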
00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.942 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44347308 kB' 'MemAvailable: 47854128 kB' 'Buffers: 2704 kB' 'Cached: 11705264 kB' 'SwapCached: 0 kB' 'Active: 8718104 kB' 'Inactive: 3506552 kB' 'Active(anon): 8323752 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519952 kB' 'Mapped: 167820 kB' 'Shmem: 7807064 kB' 'KReclaimable: 199080 kB' 'Slab: 570712 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371632 kB' 'KernelStack: 12896 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9437024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[... xtrace trimmed: the same key-by-key scan repeats, now against HugePages_Surp; every key from MemTotal through HugePages_Rsvd hits continue ...]
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
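As an aside, the lookup the call above just performed can be reproduced interactively without the helper; this one-liner is not part of the harness, it simply prints the same value from /proc/meminfo:

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo    # -> 0 on this box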
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.944 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44347732 kB' 'MemAvailable: 47854552 kB' 'Buffers: 2704 kB' 'Cached: 11705264 kB' 'SwapCached: 0 kB' 'Active: 8717812 kB' 'Inactive: 3506552 kB' 'Active(anon): 8323460 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519660 kB' 'Mapped: 167820 kB' 'Shmem: 7807064 kB' 'KReclaimable: 199080 kB' 'Slab: 570712 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371632 kB' 'KernelStack: 12896 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9437044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
[... xtrace trimmed: key-by-key scan against HugePages_Rsvd; every key from MemTotal through HugePages_Free hits continue ...]
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.946 nr_hugepages=1024
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.946 resv_hugepages=0
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.946 surplus_hugepages=0
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.946 anon_hugepages=0
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.946 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.947 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.947 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44348456 kB' 'MemAvailable: 47855276 kB' 'Buffers: 2704 kB' 'Cached: 11705304 kB' 'SwapCached: 0 kB' 'Active: 8718132 kB' 'Inactive: 3506552 kB' 'Active(anon): 8323780 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519952 kB' 'Mapped: 167820 kB' 'Shmem: 7807104 kB' 'KReclaimable: 199080 kB' 'Slab: 570712 kB' 'SReclaimable: 199080 kB' 'SUnreclaim: 371632 kB' 'KernelStack: 12896 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9437068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 15941632 kB' 'DirectMap1G: 51380224 kB'
00:04:05.947 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (meminfo field scan: every key from MemTotal through Unaccepted tested against HugePages_Total; each non-match hits "continue" and re-enters the IFS=': ' read -r var val _ loop)
00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
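Right above, get_meminfo runs again with a node argument: because /sys/devices/system/node/node0/meminfo exists, it is read instead of /proc/meminfo, and the leading "Node 0 " column is stripped from every entry with an extglob substitution before the same key scan runs. A standalone sketch of just that source selection and prefix strip, assuming extglob is enabled as the trace requires:

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo                          # default source
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")             # drop the "Node 0 " prefix
    printf '%s\n' "${mem[@]:0:3}"                # MemTotal/MemFree/MemUsed lines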
00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.948 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26519184 kB' 'MemUsed: 6310700 kB' 'SwapCached: 0 kB' 'Active: 2936916 kB' 'Inactive: 110044 kB' 'Active(anon): 2826028 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2751104 kB' 'Mapped: 43848 kB' 'AnonPages: 299064 kB' 'Shmem: 2530172 kB' 'KernelStack: 8344 kB' 'PageTables: 4876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89504 kB' 'Slab: 309616 kB' 'SReclaimable: 89504 kB' 'SUnreclaim: 220112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:05.949 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (node0 field scan: every key from MemTotal through HugePages_Free tested against HugePages_Surp; each non-match hits "continue" and re-enters the IFS=': ' read -r var val _ loop)
00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.950 node0=1024 expecting 1024 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.950 00:04:05.950 real 0m2.736s 00:04:05.950 user 0m1.102s 00:04:05.950 sys 0m1.530s
00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.950 06:51:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:05.950 ************************************ 00:04:05.950 END TEST no_shrink_alloc 00:04:05.950 ************************************
00:04:05.950 06:51:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
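The verdict line "node0=1024 expecting 1024" is the point of no_shrink_alloc: after the surplus and reserved accounting, node 0 must still hold its full 1024-page pool. A rough standalone check of the same invariant via sysfs; the hugepages-2048kB path matches the 'Hugepagesize: 2048 kB' reported earlier, and checking only node 0 mirrors this particular run:

    expecting=1024
    got=$(</sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node0=$got expecting $expecting"
    [[ $got == "$expecting" ]]   # a nonzero exit here is what would fail the test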
00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.950 06:51:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.950 00:04:05.950 real 0m11.340s 00:04:05.950 user 0m4.316s 00:04:05.950 sys 0m5.892s 00:04:05.950 06:51:35 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.950 06:51:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.950 ************************************ 00:04:05.950 END TEST hugepages 00:04:05.950 ************************************ 00:04:05.950 06:51:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.950 06:51:35 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:05.950 06:51:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.950 06:51:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.950 06:51:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.208 ************************************ 00:04:06.208 START TEST driver 00:04:06.208 ************************************ 00:04:06.208 06:51:35 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:06.208 * Looking for test storage... 
00:04:06.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.208 06:51:35 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:06.208 06:51:35 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.208 06:51:35 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.761 06:51:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:08.761 06:51:37 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.761 06:51:37 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.761 06:51:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.761 ************************************ 00:04:08.761 START TEST guess_driver 00:04:08.761 ************************************ 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:08.761 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.761 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.761 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.761 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.761 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:08.761 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:08.761 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:08.761 06:51:37 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:08.761 Looking for driver=vfio-pci 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.761 06:51:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:09.692 06:51:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.692 06:51:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.692 06:51:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:09.949 06:51:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # (the same @58/@61/@57 triple repeats for each remaining device line in the config output, every one bound to vfio-pci, through 00:04:09.950)
00:04:10.884 06:51:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.884 06:51:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.884 06:51:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:10.884 06:51:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:10.884 06:51:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:10.884 06:51:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.884 06:51:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:13.446 00:04:13.446 real 0m4.893s 00:04:13.446 user 0m1.104s 00:04:13.446 sys 0m1.896s
00:04:13.446 06:51:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.446 06:51:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:13.446 ************************************ 00:04:13.446 END TEST guess_driver 00:04:13.446 ************************************
00:04:13.446 06:51:42 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:04:13.446 00:04:13.446 real 0m7.461s 00:04:13.446 user 0m1.687s 00:04:13.446 sys 0m2.892s
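The guess_driver pass condensed above has two halves: pick_driver chooses vfio-pci because the box exposes IOMMU groups (141 of them) and modprobe can resolve vfio_pci's module chain, and the read loop then walks the "-> <driver>" markers in the config listing, leaving fail at 0 only if every device is bound to the picked driver. A condensed sketch of that flow; setup.sh here stands for spdk/scripts/setup.sh, and the real helper also honors the unsafe-noiommu knob and has fallbacks omitted here:

    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) &&
            modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver) fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue          # only device lines carry "-> <driver>"
        [[ $setup_driver == "$driver" ]] || fail=1
    done < <(setup.sh config)
    (( fail == 0 )) && echo "all devices bound to $driver"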
06:51:42 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.446 06:51:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:13.446 ************************************ 00:04:13.446 END TEST driver 00:04:13.446 ************************************
00:04:13.704 06:51:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:13.704 06:51:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.704 06:51:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.704 06:51:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.704 06:51:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:13.704 ************************************ 00:04:13.704 START TEST devices 00:04:13.704 ************************************
00:04:13.704 06:51:42 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.704 * Looking for test storage... 00:04:13.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:13.704 06:51:42 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.704 06:51:42 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:13.704 06:51:42 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.704 06:51:42 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.077 06:51:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:15.077 06:51:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:15.077 06:51:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
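get_zoned_devs above screens out zoned block devices before the mount tests (a zoned namespace cannot be partitioned and mounted like a conventional disk); on this rig nvme0n1 reports "none" and passes. The check is just the sysfs queue/zoned attribute, sketched here under those assumptions:

    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # anything other than "none" marks a host-aware/host-managed zoned device
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs+=("$dev")        # excluded from the device tests
        fi
    done
    echo "zoned devices: ${zoned_devs[*]:-none found}"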
06:51:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:15.077 No valid GPT data, bailing 00:04:15.335 06:51:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.335 06:51:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.335 06:51:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.335 06:51:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:15.335 06:51:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:15.335 06:51:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:15.335 06:51:44 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:15.335 06:51:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:15.335 06:51:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.335 06:51:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:15.335 06:51:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:15.335 06:51:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:15.335 06:51:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:15.335 06:51:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.335 06:51:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.335 06:51:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.335 ************************************ 00:04:15.335 START TEST nvme_mount 00:04:15.335 ************************************ 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.336 06:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:16.269 Creating new GPT entries in memory. 00:04:16.269 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:16.269 other utilities. 00:04:16.269 06:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.269 06:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.269 06:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.269 06:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.269 06:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:17.200 Creating new GPT entries in memory. 00:04:17.200 The operation has completed successfully. 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1371413 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:17.200 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.457 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.458 06:51:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.458 06:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.392 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.651 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.651 06:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.909 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.909 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.909 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.909 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.909 06:51:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.845 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.103 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.104 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.104 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.104 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:20.104 06:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.104 06:51:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.104 06:51:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.477 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.477 00:04:21.477 real 0m6.184s 00:04:21.477 user 0m1.412s 00:04:21.477 sys 0m2.354s 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.477 06:51:50 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:21.477 ************************************ 00:04:21.477 END TEST nvme_mount 00:04:21.477 ************************************ 00:04:21.477 06:51:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:21.477 06:51:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:21.477 06:51:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.477 06:51:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.477 06:51:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:21.477 ************************************ 00:04:21.477 START TEST dm_mount 00:04:21.477 ************************************ 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:21.477 06:51:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:22.411 Creating new GPT entries in memory. 00:04:22.411 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.411 other utilities. 00:04:22.411 06:51:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.411 06:51:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.411 06:51:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:22.411 06:51:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.411 06:51:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:23.785 Creating new GPT entries in memory. 00:04:23.785 The operation has completed successfully. 00:04:23.785 06:51:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.785 06:51:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.785 06:51:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:23.785 06:51:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.785 06:51:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:24.718 The operation has completed successfully. 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1373793 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.719 06:51:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.653 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:25.914 06:51:55 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.914 06:51:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.845 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.845 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:26.845 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:26.845 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.845 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.846 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:27.103 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:27.103 00:04:27.103 real 0m5.737s 00:04:27.103 user 0m0.996s 00:04:27.103 sys 0m1.598s 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.103 06:51:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:27.103 ************************************ 00:04:27.104 END TEST dm_mount 00:04:27.104 ************************************ 00:04:27.361 06:51:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:27.361 06:51:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:27.361 06:51:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:27.361 06:51:56 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.361 06:51:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.362 06:51:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:27.362 06:51:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.362 06:51:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.620 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:27.620 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:27.620 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:27.620 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:27.620 06:51:56 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:27.620 06:51:56 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.620 06:51:56 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:27.620 06:51:56 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.620 06:51:56 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:27.620 06:51:56 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.620 06:51:56 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:27.620 00:04:27.620 real 0m13.919s 00:04:27.620 user 0m3.105s 00:04:27.620 sys 0m5.020s 00:04:27.620 06:51:56 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.620 06:51:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.620 ************************************ 00:04:27.620 END TEST devices 00:04:27.620 ************************************ 00:04:27.620 06:51:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:27.620 00:04:27.620 real 0m43.220s 00:04:27.620 user 0m12.318s 00:04:27.620 sys 0m19.115s 00:04:27.620 06:51:56 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.620 06:51:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.620 ************************************ 00:04:27.620 END TEST setup.sh 00:04:27.620 ************************************ 00:04:27.620 06:51:56 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.620 06:51:56 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:28.599 Hugepages 00:04:28.599 node hugesize free / total 00:04:28.599 node0 1048576kB 0 / 0 00:04:28.599 node0 2048kB 2048 / 2048 00:04:28.599 node1 1048576kB 0 / 0 00:04:28.599 node1 2048kB 0 / 0 00:04:28.599 00:04:28.599 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.600 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:28.600 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:28.600 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:28.600 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:28.600 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:28.600 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:28.600 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:28.600 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:28.600 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:28.600 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:28.600 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:28.600 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:28.600 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:28.600 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:28.600 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:28.600 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:28.857 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:28.857 06:51:58 -- spdk/autotest.sh@130 -- # uname -s 00:04:28.857 06:51:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:28.857 06:51:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:28.857 06:51:58 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.789 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:29.789 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.789 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:29.789 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:29.789 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:29.789 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:29.789 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:29.789 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:30.046 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:30.979 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.979 06:52:00 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:31.913 06:52:01 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:31.913 06:52:01 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:31.913 06:52:01 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.913 06:52:01 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:31.913 06:52:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:31.913 06:52:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:31.913 06:52:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.913 06:52:01 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:31.913 06:52:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:32.171 06:52:01 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:32.171 06:52:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:32.171 06:52:01 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.104 Waiting for block devices as requested 00:04:33.104 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:33.362 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:33.362 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:33.620 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:33.620 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:33.620 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:33.878 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:33.878 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:33.878 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:33.878 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:33.878 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:34.135 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:34.135 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:34.135 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:34.391 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:34.391 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:34.391 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:34.648 06:52:03 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:34.648 06:52:03 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:34.648 06:52:03 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:34.648 06:52:03 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:34.648 06:52:03 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.648 06:52:03 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.648 06:52:03 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:34.648 06:52:03 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.648 06:52:03 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.648 06:52:03 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:34.648 06:52:03 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.648 06:52:03 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.648 06:52:03 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.648 06:52:03 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.648 06:52:03 -- common/autotest_common.sh@1557 -- # continue 00:04:34.648 06:52:03 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:34.648 06:52:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.648 06:52:03 -- common/autotest_common.sh@10 -- # set +x 00:04:34.648 06:52:03 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:34.648 06:52:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.648 06:52:03 -- common/autotest_common.sh@10 -- # set +x 00:04:34.648 06:52:03 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.017 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:36.017 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:36.017 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:36.017 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:36.017 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:36.017 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:36.017 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:36.017 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:36.017 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:36.017 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
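For reference, the controller probe the harness ran just above can be repeated by hand. A minimal sketch, assuming nvme-cli is installed and reusing the PCI address and controller name from this run (0000:88:00.0, nvme0); everything else is standard sysfs:

    # Find which /dev/nvmeX sits behind PCI address 0000:88:00.0
    for c in /sys/class/nvme/nvme*; do
        readlink -f "$c" | grep -q '0000:88:00.0/nvme/nvme' && ctrlr=$(basename "$c")
    done
    # OACS lists optional admin commands; bit 0x8 is namespace management,
    # which is what the oacs_ns_manage=8 check above extracts from ' 0xf'
    nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2
    # UNVMCAP is unallocated NVM capacity; ' 0' means every byte is already
    # claimed by namespaces, so the revert loop continues without acting
    nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2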
00:04:36.017 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:36.017 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:36.017 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:36.017 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:36.017 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:36.017 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:36.946 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:36.946 06:52:06 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:36.946 06:52:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.946 06:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:36.946 06:52:06 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:36.946 06:52:06 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:36.946 06:52:06 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:36.946 06:52:06 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:36.946 06:52:06 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:36.946 06:52:06 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:36.946 06:52:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:36.946 06:52:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:36.947 06:52:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.947 06:52:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:36.947 06:52:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:36.947 06:52:06 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:36.947 06:52:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:36.947 06:52:06 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.947 06:52:06 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:36.947 06:52:06 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:36.947 06:52:06 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:36.947 06:52:06 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:36.947 06:52:06 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:36.947 06:52:06 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:36.947 06:52:06 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1378963 00:04:36.947 06:52:06 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.947 06:52:06 -- common/autotest_common.sh@1598 -- # waitforlisten 1378963 00:04:36.947 06:52:06 -- common/autotest_common.sh@829 -- # '[' -z 1378963 ']' 00:04:36.947 06:52:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.947 06:52:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.947 06:52:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.947 06:52:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.947 06:52:06 -- common/autotest_common.sh@10 -- # set +x 00:04:36.947 [2024-07-13 06:52:06.370999] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
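[Editor's note] opal_revert_cleanup only targets controllers whose PCI device ID matches 0x0a54, the ID this job filters on; the trace above shows it reading each controller's sysfs `device` file and comparing. A rough standalone equivalent, with paths copied from this job (a sketch, not the exact helper):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  target=0x0a54
  # gen_nvme.sh emits a JSON bdev config; pull each controller's PCI address.
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      # sysfs exposes the PCI device ID as a hex string, e.g. "0x0a54".
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target" ]] && echo "$bdf"
  done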
00:04:36.947 [2024-07-13 06:52:06.371100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378963 ] 00:04:36.947 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.204 [2024-07-13 06:52:06.404020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:37.204 [2024-07-13 06:52:06.430087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.204 [2024-07-13 06:52:06.512000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.460 06:52:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.460 06:52:06 -- common/autotest_common.sh@862 -- # return 0 00:04:37.460 06:52:06 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:37.460 06:52:06 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:37.460 06:52:06 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:40.739 nvme0n1 00:04:40.739 06:52:09 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:40.739 [2024-07-13 06:52:10.084050] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:40.739 [2024-07-13 06:52:10.084113] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:40.739 request: 00:04:40.739 { 00:04:40.739 "nvme_ctrlr_name": "nvme0", 00:04:40.739 "password": "test", 00:04:40.739 "method": "bdev_nvme_opal_revert", 00:04:40.739 "req_id": 1 00:04:40.739 } 00:04:40.739 Got JSON-RPC error response 00:04:40.739 response: 00:04:40.739 { 00:04:40.739 "code": -32603, 00:04:40.739 "message": "Internal error" 00:04:40.739 } 00:04:40.739 06:52:10 -- common/autotest_common.sh@1604 -- # true 00:04:40.739 06:52:10 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:40.739 06:52:10 -- common/autotest_common.sh@1608 -- # killprocess 1378963 00:04:40.739 06:52:10 -- common/autotest_common.sh@948 -- # '[' -z 1378963 ']' 00:04:40.739 06:52:10 -- common/autotest_common.sh@952 -- # kill -0 1378963 00:04:40.739 06:52:10 -- common/autotest_common.sh@953 -- # uname 00:04:40.739 06:52:10 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.739 06:52:10 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1378963 00:04:40.739 06:52:10 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.739 06:52:10 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.739 06:52:10 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1378963' 00:04:40.739 killing process with pid 1378963 00:04:40.739 06:52:10 -- common/autotest_common.sh@967 -- # kill 1378963 00:04:40.739 06:52:10 -- common/autotest_common.sh@972 -- # wait 1378963 00:04:42.636 06:52:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:42.636 06:52:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:42.636 06:52:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.636 06:52:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.636 06:52:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:42.636 06:52:11 -- common/autotest_common.sh@722 -- # xtrace_disable 
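[Editor's note] The revert failure above is an ordinary JSON-RPC exchange: rpc.py posts `bdev_nvme_opal_revert` and the target answers -32603 because the drive refused the admin SP session (error 18), not because the request was malformed. Reproduced by hand against a running spdk_tgt it would look like this, with addresses and names copied from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
  if ! $rpc bdev_nvme_opal_revert -b nvme0 -p test; then
      # Expected on this drive: the revert is attempted opportunistically
      # and the run simply continues past the failure, as the log shows.
      echo "opal revert refused, continuing"
  fi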
00:04:42.636 06:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:42.636 06:52:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:42.636 06:52:11 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:42.636 06:52:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.636 06:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.636 06:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:42.636 ************************************ 00:04:42.636 START TEST env 00:04:42.636 ************************************ 00:04:42.636 06:52:11 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:42.636 * Looking for test storage... 00:04:42.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:42.637 06:52:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.637 06:52:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.637 06:52:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.637 06:52:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.637 ************************************ 00:04:42.637 START TEST env_memory 00:04:42.637 ************************************ 00:04:42.637 06:52:11 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.637 00:04:42.637 00:04:42.637 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.637 http://cunit.sourceforge.net/ 00:04:42.637 00:04:42.637 00:04:42.637 Suite: memory 00:04:42.637 Test: alloc and free memory map ...[2024-07-13 06:52:12.011517] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.637 passed 00:04:42.637 Test: mem map translation ...[2024-07-13 06:52:12.031106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.637 [2024-07-13 06:52:12.031129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.637 [2024-07-13 06:52:12.031180] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.637 [2024-07-13 06:52:12.031191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.637 passed 00:04:42.637 Test: mem map registration ...[2024-07-13 06:52:12.071567] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:42.637 [2024-07-13 06:52:12.071586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.637 passed 00:04:42.896 Test: mem map adjacent registrations ...passed 00:04:42.896 00:04:42.896 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.896 suites 1 1 n/a 0 0 00:04:42.896 tests 4 4 4 0 0 00:04:42.896 
asserts 152 152 152 0 n/a 00:04:42.896 00:04:42.896 Elapsed time = 0.139 seconds 00:04:42.896 00:04:42.896 real 0m0.146s 00:04:42.896 user 0m0.140s 00:04:42.896 sys 0m0.006s 00:04:42.896 06:52:12 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.896 06:52:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.896 ************************************ 00:04:42.896 END TEST env_memory 00:04:42.896 ************************************ 00:04:42.896 06:52:12 env -- common/autotest_common.sh@1142 -- # return 0 00:04:42.896 06:52:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.896 06:52:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.896 06:52:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.896 06:52:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.896 ************************************ 00:04:42.896 START TEST env_vtophys 00:04:42.896 ************************************ 00:04:42.896 06:52:12 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.896 EAL: lib.eal log level changed from notice to debug 00:04:42.896 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.896 EAL: Detected lcore 1 as core 1 on socket 0 00:04:42.896 EAL: Detected lcore 2 as core 2 on socket 0 00:04:42.896 EAL: Detected lcore 3 as core 3 on socket 0 00:04:42.896 EAL: Detected lcore 4 as core 4 on socket 0 00:04:42.896 EAL: Detected lcore 5 as core 5 on socket 0 00:04:42.896 EAL: Detected lcore 6 as core 8 on socket 0 00:04:42.896 EAL: Detected lcore 7 as core 9 on socket 0 00:04:42.896 EAL: Detected lcore 8 as core 10 on socket 0 00:04:42.896 EAL: Detected lcore 9 as core 11 on socket 0 00:04:42.896 EAL: Detected lcore 10 as core 12 on socket 0 00:04:42.896 EAL: Detected lcore 11 as core 13 on socket 0 00:04:42.896 EAL: Detected lcore 12 as core 0 on socket 1 00:04:42.896 EAL: Detected lcore 13 as core 1 on socket 1 00:04:42.896 EAL: Detected lcore 14 as core 2 on socket 1 00:04:42.896 EAL: Detected lcore 15 as core 3 on socket 1 00:04:42.896 EAL: Detected lcore 16 as core 4 on socket 1 00:04:42.896 EAL: Detected lcore 17 as core 5 on socket 1 00:04:42.896 EAL: Detected lcore 18 as core 8 on socket 1 00:04:42.896 EAL: Detected lcore 19 as core 9 on socket 1 00:04:42.896 EAL: Detected lcore 20 as core 10 on socket 1 00:04:42.896 EAL: Detected lcore 21 as core 11 on socket 1 00:04:42.896 EAL: Detected lcore 22 as core 12 on socket 1 00:04:42.896 EAL: Detected lcore 23 as core 13 on socket 1 00:04:42.896 EAL: Detected lcore 24 as core 0 on socket 0 00:04:42.896 EAL: Detected lcore 25 as core 1 on socket 0 00:04:42.896 EAL: Detected lcore 26 as core 2 on socket 0 00:04:42.896 EAL: Detected lcore 27 as core 3 on socket 0 00:04:42.896 EAL: Detected lcore 28 as core 4 on socket 0 00:04:42.896 EAL: Detected lcore 29 as core 5 on socket 0 00:04:42.896 EAL: Detected lcore 30 as core 8 on socket 0 00:04:42.896 EAL: Detected lcore 31 as core 9 on socket 0 00:04:42.896 EAL: Detected lcore 32 as core 10 on socket 0 00:04:42.896 EAL: Detected lcore 33 as core 11 on socket 0 00:04:42.896 EAL: Detected lcore 34 as core 12 on socket 0 00:04:42.896 EAL: Detected lcore 35 as core 13 on socket 0 00:04:42.896 EAL: Detected lcore 36 as core 0 on socket 1 00:04:42.896 EAL: Detected lcore 37 as core 1 on socket 1 00:04:42.896 EAL: Detected lcore 38 as core 2 on socket 1 
00:04:42.896 EAL: Detected lcore 39 as core 3 on socket 1 00:04:42.896 EAL: Detected lcore 40 as core 4 on socket 1 00:04:42.896 EAL: Detected lcore 41 as core 5 on socket 1 00:04:42.896 EAL: Detected lcore 42 as core 8 on socket 1 00:04:42.896 EAL: Detected lcore 43 as core 9 on socket 1 00:04:42.896 EAL: Detected lcore 44 as core 10 on socket 1 00:04:42.896 EAL: Detected lcore 45 as core 11 on socket 1 00:04:42.896 EAL: Detected lcore 46 as core 12 on socket 1 00:04:42.896 EAL: Detected lcore 47 as core 13 on socket 1 00:04:42.896 EAL: Maximum logical cores by configuration: 128 00:04:42.896 EAL: Detected CPU lcores: 48 00:04:42.896 EAL: Detected NUMA nodes: 2 00:04:42.896 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:42.896 EAL: Detected shared linkage of DPDK 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:42.896 EAL: Registered [vdev] bus. 00:04:42.896 EAL: bus.vdev log level changed from disabled to notice 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:42.896 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:42.896 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:42.896 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:42.896 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.896 EAL: No shared files mode enabled, IPC is disabled 00:04:42.896 EAL: Bus pci wants IOVA as 'DC' 00:04:42.896 EAL: Bus vdev wants IOVA as 'DC' 00:04:42.896 EAL: Buses did not request a specific IOVA mode. 00:04:42.896 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:42.896 EAL: Selected IOVA mode 'VA' 00:04:42.896 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.896 EAL: Probing VFIO support... 00:04:42.896 EAL: IOMMU type 1 (Type 1) is supported 00:04:42.896 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:42.896 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:42.896 EAL: VFIO support initialized 00:04:42.896 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.896 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.896 EAL: Setting up physically contiguous memory... 
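[Editor's note] Everything the EAL just probed (2 MB hugepages, VFIO, IOMMU type 1) can be sanity-checked from the shell before a run. A quick illustrative check, not part of the test itself:

  grep Huge /proc/meminfo              # HugePages_Total / HugePages_Free / Hugepagesize
  ls /sys/kernel/iommu_groups | wc -l  # non-zero when the IOMMU is on, so VFIO can bind
  lsmod | grep -c vfio_pci             # vfio-pci must be loaded for the rebinds above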
00:04:42.896 EAL: Setting maximum number of open files to 524288 00:04:42.896 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.896 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:42.896 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.896 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:42.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.896 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:42.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.896 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:42.896 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:42.896 EAL: Hugepages will be freed exactly as allocated. 00:04:42.896 EAL: No shared files mode enabled, IPC is disabled 00:04:42.896 EAL: No shared files mode enabled, IPC is disabled 00:04:42.896 EAL: TSC frequency is ~2700000 KHz 00:04:42.896 EAL: Main lcore 0 is ready (tid=7f6c20551a00;cpuset=[0]) 00:04:42.896 EAL: Trying to obtain current memory policy. 00:04:42.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.896 EAL: Restoring previous memory policy: 0 00:04:42.896 EAL: request: mp_malloc_sync 00:04:42.896 EAL: No shared files mode enabled, IPC is disabled 00:04:42.896 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.896 EAL: No shared files mode enabled, IPC is disabled 00:04:42.896 EAL: No shared files mode enabled, IPC is disabled 00:04:42.896 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.897 00:04:42.897 00:04:42.897 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.897 http://cunit.sourceforge.net/ 00:04:42.897 00:04:42.897 00:04:42.897 Suite: components_suite 00:04:42.897 Test: vtophys_malloc_test ...passed 00:04:42.897 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.897 EAL: Restoring previous memory policy: 4 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.897 EAL: Trying to obtain current memory policy. 00:04:42.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.897 EAL: Restoring previous memory policy: 4 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.897 EAL: Trying to obtain current memory policy. 00:04:42.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.897 EAL: Restoring previous memory policy: 4 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.897 EAL: Trying to obtain current memory policy. 
00:04:42.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.897 EAL: Restoring previous memory policy: 4 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.897 EAL: Trying to obtain current memory policy. 00:04:42.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.897 EAL: Restoring previous memory policy: 4 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.897 EAL: Trying to obtain current memory policy. 00:04:42.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.897 EAL: Restoring previous memory policy: 4 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.897 EAL: Trying to obtain current memory policy. 00:04:42.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.897 EAL: Restoring previous memory policy: 4 00:04:42.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.897 EAL: request: mp_malloc_sync 00:04:42.897 EAL: No shared files mode enabled, IPC is disabled 00:04:42.897 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.156 EAL: request: mp_malloc_sync 00:04:43.156 EAL: No shared files mode enabled, IPC is disabled 00:04:43.156 EAL: Heap on socket 0 was shrunk by 130MB 00:04:43.156 EAL: Trying to obtain current memory policy. 00:04:43.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.156 EAL: Restoring previous memory policy: 4 00:04:43.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.156 EAL: request: mp_malloc_sync 00:04:43.156 EAL: No shared files mode enabled, IPC is disabled 00:04:43.156 EAL: Heap on socket 0 was expanded by 258MB 00:04:43.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.156 EAL: request: mp_malloc_sync 00:04:43.156 EAL: No shared files mode enabled, IPC is disabled 00:04:43.156 EAL: Heap on socket 0 was shrunk by 258MB 00:04:43.156 EAL: Trying to obtain current memory policy. 
00:04:43.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.414 EAL: Restoring previous memory policy: 4 00:04:43.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.414 EAL: request: mp_malloc_sync 00:04:43.414 EAL: No shared files mode enabled, IPC is disabled 00:04:43.414 EAL: Heap on socket 0 was expanded by 514MB 00:04:43.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.671 EAL: request: mp_malloc_sync 00:04:43.671 EAL: No shared files mode enabled, IPC is disabled 00:04:43.671 EAL: Heap on socket 0 was shrunk by 514MB 00:04:43.671 EAL: Trying to obtain current memory policy. 00:04:43.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.929 EAL: Restoring previous memory policy: 4 00:04:43.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.929 EAL: request: mp_malloc_sync 00:04:43.929 EAL: No shared files mode enabled, IPC is disabled 00:04:43.929 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.452 EAL: request: mp_malloc_sync 00:04:44.452 EAL: No shared files mode enabled, IPC is disabled 00:04:44.452 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.452 passed 00:04:44.452 00:04:44.452 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.452 suites 1 1 n/a 0 0 00:04:44.452 tests 2 2 2 0 0 00:04:44.452 asserts 497 497 497 0 n/a 00:04:44.452 00:04:44.452 Elapsed time = 1.390 seconds 00:04:44.452 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.452 EAL: request: mp_malloc_sync 00:04:44.452 EAL: No shared files mode enabled, IPC is disabled 00:04:44.452 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.452 EAL: No shared files mode enabled, IPC is disabled 00:04:44.452 EAL: No shared files mode enabled, IPC is disabled 00:04:44.452 EAL: No shared files mode enabled, IPC is disabled 00:04:44.452 00:04:44.452 real 0m1.506s 00:04:44.452 user 0m0.881s 00:04:44.452 sys 0m0.595s 00:04:44.452 06:52:13 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.452 06:52:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:44.452 ************************************ 00:04:44.452 END TEST env_vtophys 00:04:44.452 ************************************ 00:04:44.452 06:52:13 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.452 06:52:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.452 06:52:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.452 06:52:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.452 06:52:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.452 ************************************ 00:04:44.452 START TEST env_pci 00:04:44.452 ************************************ 00:04:44.452 06:52:13 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.452 00:04:44.452 00:04:44.452 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.452 http://cunit.sourceforge.net/ 00:04:44.452 00:04:44.452 00:04:44.452 Suite: pci 00:04:44.452 Test: pci_hook ...[2024-07-13 06:52:13.731589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1379853 has claimed it 00:04:44.452 EAL: Cannot find device (10000:00:01.0) 00:04:44.452 EAL: Failed to attach device on primary process 00:04:44.452 passed 00:04:44.452 
00:04:44.452 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.452 suites 1 1 n/a 0 0 00:04:44.452 tests 1 1 1 0 0 00:04:44.452 asserts 25 25 25 0 n/a 00:04:44.452 00:04:44.452 Elapsed time = 0.021 seconds 00:04:44.452 00:04:44.452 real 0m0.032s 00:04:44.452 user 0m0.008s 00:04:44.452 sys 0m0.024s 00:04:44.452 06:52:13 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.452 06:52:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:44.452 ************************************ 00:04:44.452 END TEST env_pci 00:04:44.452 ************************************ 00:04:44.452 06:52:13 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.452 06:52:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.452 06:52:13 env -- env/env.sh@15 -- # uname 00:04:44.452 06:52:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.452 06:52:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:44.452 06:52:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.452 06:52:13 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:44.452 06:52:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.452 06:52:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.452 ************************************ 00:04:44.452 START TEST env_dpdk_post_init 00:04:44.452 ************************************ 00:04:44.452 06:52:13 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.452 EAL: Detected CPU lcores: 48 00:04:44.452 EAL: Detected NUMA nodes: 2 00:04:44.452 EAL: Detected shared linkage of DPDK 00:04:44.452 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.452 EAL: Selected IOVA mode 'VA' 00:04:44.452 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.452 EAL: VFIO support initialized 00:04:44.713 EAL: Using IOMMU type 1 (Type 1) 00:04:48.894 Starting DPDK initialization... 00:04:48.894 Starting SPDK post initialization... 00:04:48.894 SPDK NVMe probe 00:04:48.894 Attaching to 0000:88:00.0 00:04:48.894 Attached to 0000:88:00.0 00:04:48.894 Cleaning up... 
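[Editor's note] env.sh builds the same EAL argument string for every helper binary: `-c 0x1`, plus `--base-virtaddr=0x200000000000` on Linux, as the xtrace above shows. Any of the helpers can therefore be rerun standalone; a sketch with the flags and path copied from this job (root is needed because the NVMe device is bound to vfio-pci):

  testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
  sudo "$testdir/env_dpdk_post_init/env_dpdk_post_init" \
      -c 0x1 --base-virtaddr=0x200000000000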
00:04:48.894 00:04:48.894 real 0m4.390s 00:04:48.894 user 0m3.258s 00:04:48.894 sys 0m0.193s 00:04:48.894 06:52:18 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.894 06:52:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.894 ************************************ 00:04:48.894 END TEST env_dpdk_post_init 00:04:48.894 ************************************ 00:04:48.894 06:52:18 env -- common/autotest_common.sh@1142 -- # return 0 00:04:48.894 06:52:18 env -- env/env.sh@26 -- # uname 00:04:48.894 06:52:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.894 06:52:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.894 06:52:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.894 06:52:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.894 06:52:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.894 ************************************ 00:04:48.894 START TEST env_mem_callbacks 00:04:48.894 ************************************ 00:04:48.894 06:52:18 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.894 EAL: Detected CPU lcores: 48 00:04:48.894 EAL: Detected NUMA nodes: 2 00:04:48.894 EAL: Detected shared linkage of DPDK 00:04:48.894 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.894 EAL: Selected IOVA mode 'VA' 00:04:48.894 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.894 EAL: VFIO support initialized 00:04:48.894 00:04:48.894 00:04:48.894 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.894 http://cunit.sourceforge.net/ 00:04:48.894 00:04:48.894 00:04:48.894 Suite: memory 00:04:48.894 Test: test ... 
00:04:48.894 register 0x200000200000 2097152 00:04:48.894 malloc 3145728 00:04:48.894 register 0x200000400000 4194304 00:04:48.894 buf 0x200000500000 len 3145728 PASSED 00:04:48.894 malloc 64 00:04:48.894 buf 0x2000004fff40 len 64 PASSED 00:04:48.894 malloc 4194304 00:04:48.894 register 0x200000800000 6291456 00:04:48.894 buf 0x200000a00000 len 4194304 PASSED 00:04:48.894 free 0x200000500000 3145728 00:04:48.894 free 0x2000004fff40 64 00:04:48.894 unregister 0x200000400000 4194304 PASSED 00:04:48.894 free 0x200000a00000 4194304 00:04:48.894 unregister 0x200000800000 6291456 PASSED 00:04:48.894 malloc 8388608 00:04:48.894 register 0x200000400000 10485760 00:04:48.894 buf 0x200000600000 len 8388608 PASSED 00:04:48.894 free 0x200000600000 8388608 00:04:48.894 unregister 0x200000400000 10485760 PASSED 00:04:48.894 passed 00:04:48.894 00:04:48.894 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.894 suites 1 1 n/a 0 0 00:04:48.894 tests 1 1 1 0 0 00:04:48.894 asserts 15 15 15 0 n/a 00:04:48.894 00:04:48.894 Elapsed time = 0.005 seconds 00:04:48.894 00:04:48.894 real 0m0.050s 00:04:48.894 user 0m0.015s 00:04:48.894 sys 0m0.035s 00:04:48.894 06:52:18 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.894 06:52:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:48.894 ************************************ 00:04:48.894 END TEST env_mem_callbacks 00:04:48.894 ************************************ 00:04:48.894 06:52:18 env -- common/autotest_common.sh@1142 -- # return 0 00:04:48.894 00:04:48.894 real 0m6.416s 00:04:48.894 user 0m4.415s 00:04:48.894 sys 0m1.048s 00:04:48.894 06:52:18 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.894 06:52:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.894 ************************************ 00:04:48.894 END TEST env 00:04:48.894 ************************************ 00:04:48.894 06:52:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.894 06:52:18 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:48.894 06:52:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.894 06:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.894 06:52:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.151 ************************************ 00:04:49.151 START TEST rpc 00:04:49.151 ************************************ 00:04:49.151 06:52:18 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.151 * Looking for test storage... 00:04:49.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.151 06:52:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1380512 00:04:49.151 06:52:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.151 06:52:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.151 06:52:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1380512 00:04:49.151 06:52:18 rpc -- common/autotest_common.sh@829 -- # '[' -z 1380512 ']' 00:04:49.151 06:52:18 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.151 06:52:18 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.151 06:52:18 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:49.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.151 06:52:18 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.151 06:52:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.151 [2024-07-13 06:52:18.461260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:49.151 [2024-07-13 06:52:18.461353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380512 ] 00:04:49.151 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.151 [2024-07-13 06:52:18.495969] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:49.151 [2024-07-13 06:52:18.524458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.409 [2024-07-13 06:52:18.611293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.409 [2024-07-13 06:52:18.611341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1380512' to capture a snapshot of events at runtime. 00:04:49.409 [2024-07-13 06:52:18.611368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.409 [2024-07-13 06:52:18.611379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.409 [2024-07-13 06:52:18.611390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1380512 for offline analysis/debug. 00:04:49.409 [2024-07-13 06:52:18.611415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.666 06:52:18 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.666 06:52:18 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:49.666 06:52:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.666 06:52:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.666 06:52:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:49.667 06:52:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:49.667 06:52:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.667 06:52:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.667 06:52:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 ************************************ 00:04:49.667 START TEST rpc_integrity 00:04:49.667 ************************************ 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.667 06:52:18 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.667 { 00:04:49.667 "name": "Malloc0", 00:04:49.667 "aliases": [ 00:04:49.667 "255a82e5-9214-44d6-b3bb-dd76b3ce07f3" 00:04:49.667 ], 00:04:49.667 "product_name": "Malloc disk", 00:04:49.667 "block_size": 512, 00:04:49.667 "num_blocks": 16384, 00:04:49.667 "uuid": "255a82e5-9214-44d6-b3bb-dd76b3ce07f3", 00:04:49.667 "assigned_rate_limits": { 00:04:49.667 "rw_ios_per_sec": 0, 00:04:49.667 "rw_mbytes_per_sec": 0, 00:04:49.667 "r_mbytes_per_sec": 0, 00:04:49.667 "w_mbytes_per_sec": 0 00:04:49.667 }, 00:04:49.667 "claimed": false, 00:04:49.667 "zoned": false, 00:04:49.667 "supported_io_types": { 00:04:49.667 "read": true, 00:04:49.667 "write": true, 00:04:49.667 "unmap": true, 00:04:49.667 "flush": true, 00:04:49.667 "reset": true, 00:04:49.667 "nvme_admin": false, 00:04:49.667 "nvme_io": false, 00:04:49.667 "nvme_io_md": false, 00:04:49.667 "write_zeroes": true, 00:04:49.667 "zcopy": true, 00:04:49.667 "get_zone_info": false, 00:04:49.667 "zone_management": false, 00:04:49.667 "zone_append": false, 00:04:49.667 "compare": false, 00:04:49.667 "compare_and_write": false, 00:04:49.667 "abort": true, 00:04:49.667 "seek_hole": false, 00:04:49.667 "seek_data": false, 00:04:49.667 "copy": true, 00:04:49.667 "nvme_iov_md": false 00:04:49.667 }, 00:04:49.667 "memory_domains": [ 00:04:49.667 { 00:04:49.667 "dma_device_id": "system", 00:04:49.667 "dma_device_type": 1 00:04:49.667 }, 00:04:49.667 { 00:04:49.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.667 "dma_device_type": 2 00:04:49.667 } 00:04:49.667 ], 00:04:49.667 "driver_specific": {} 00:04:49.667 } 00:04:49.667 ]' 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.667 06:52:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.667 06:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 [2024-07-13 06:52:19.004139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.667 [2024-07-13 06:52:19.004200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.667 [2024-07-13 06:52:19.004224] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf2e7f0 00:04:49.667 [2024-07-13 06:52:19.004241] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.667 [2024-07-13 06:52:19.005733] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.667 [2024-07-13 06:52:19.005761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.667 Passthru0 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.667 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.667 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.667 { 00:04:49.667 "name": "Malloc0", 00:04:49.667 "aliases": [ 00:04:49.667 "255a82e5-9214-44d6-b3bb-dd76b3ce07f3" 00:04:49.667 ], 00:04:49.667 "product_name": "Malloc disk", 00:04:49.667 "block_size": 512, 00:04:49.667 "num_blocks": 16384, 00:04:49.667 "uuid": "255a82e5-9214-44d6-b3bb-dd76b3ce07f3", 00:04:49.667 "assigned_rate_limits": { 00:04:49.667 "rw_ios_per_sec": 0, 00:04:49.667 "rw_mbytes_per_sec": 0, 00:04:49.667 "r_mbytes_per_sec": 0, 00:04:49.667 "w_mbytes_per_sec": 0 00:04:49.667 }, 00:04:49.667 "claimed": true, 00:04:49.667 "claim_type": "exclusive_write", 00:04:49.667 "zoned": false, 00:04:49.667 "supported_io_types": { 00:04:49.667 "read": true, 00:04:49.667 "write": true, 00:04:49.667 "unmap": true, 00:04:49.667 "flush": true, 00:04:49.667 "reset": true, 00:04:49.667 "nvme_admin": false, 00:04:49.667 "nvme_io": false, 00:04:49.667 "nvme_io_md": false, 00:04:49.667 "write_zeroes": true, 00:04:49.667 "zcopy": true, 00:04:49.667 "get_zone_info": false, 00:04:49.667 "zone_management": false, 00:04:49.667 "zone_append": false, 00:04:49.667 "compare": false, 00:04:49.667 "compare_and_write": false, 00:04:49.667 "abort": true, 00:04:49.667 "seek_hole": false, 00:04:49.667 "seek_data": false, 00:04:49.667 "copy": true, 00:04:49.667 "nvme_iov_md": false 00:04:49.667 }, 00:04:49.667 "memory_domains": [ 00:04:49.667 { 00:04:49.667 "dma_device_id": "system", 00:04:49.667 "dma_device_type": 1 00:04:49.667 }, 00:04:49.667 { 00:04:49.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.667 "dma_device_type": 2 00:04:49.667 } 00:04:49.667 ], 00:04:49.667 "driver_specific": {} 00:04:49.667 }, 00:04:49.667 { 00:04:49.667 "name": "Passthru0", 00:04:49.667 "aliases": [ 00:04:49.667 "2afacc42-6a6d-5d0e-9291-d9e52bae160f" 00:04:49.667 ], 00:04:49.667 "product_name": "passthru", 00:04:49.667 "block_size": 512, 00:04:49.667 "num_blocks": 16384, 00:04:49.667 "uuid": "2afacc42-6a6d-5d0e-9291-d9e52bae160f", 00:04:49.667 "assigned_rate_limits": { 00:04:49.667 "rw_ios_per_sec": 0, 00:04:49.667 "rw_mbytes_per_sec": 0, 00:04:49.667 "r_mbytes_per_sec": 0, 00:04:49.667 "w_mbytes_per_sec": 0 00:04:49.667 }, 00:04:49.667 "claimed": false, 00:04:49.667 "zoned": false, 00:04:49.667 "supported_io_types": { 00:04:49.667 "read": true, 00:04:49.667 "write": true, 00:04:49.667 "unmap": true, 00:04:49.667 "flush": true, 00:04:49.667 "reset": true, 00:04:49.667 "nvme_admin": false, 00:04:49.667 "nvme_io": false, 00:04:49.667 "nvme_io_md": false, 00:04:49.667 "write_zeroes": true, 00:04:49.667 "zcopy": true, 00:04:49.667 "get_zone_info": false, 
00:04:49.667 "zone_management": false, 00:04:49.667 "zone_append": false, 00:04:49.667 "compare": false, 00:04:49.667 "compare_and_write": false, 00:04:49.667 "abort": true, 00:04:49.667 "seek_hole": false, 00:04:49.667 "seek_data": false, 00:04:49.667 "copy": true, 00:04:49.667 "nvme_iov_md": false 00:04:49.667 }, 00:04:49.667 "memory_domains": [ 00:04:49.667 { 00:04:49.667 "dma_device_id": "system", 00:04:49.667 "dma_device_type": 1 00:04:49.667 }, 00:04:49.667 { 00:04:49.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.667 "dma_device_type": 2 00:04:49.667 } 00:04:49.667 ], 00:04:49.667 "driver_specific": { 00:04:49.667 "passthru": { 00:04:49.667 "name": "Passthru0", 00:04:49.667 "base_bdev_name": "Malloc0" 00:04:49.667 } 00:04:49.667 } 00:04:49.667 } 00:04:49.667 ]' 00:04:49.667 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.667 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.667 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.667 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.667 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.667 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.668 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.668 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.668 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.668 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.668 06:52:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.668 00:04:49.668 real 0m0.227s 00:04:49.668 user 0m0.146s 00:04:49.668 sys 0m0.021s 00:04:49.668 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.668 06:52:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.668 ************************************ 00:04:49.668 END TEST rpc_integrity 00:04:49.668 ************************************ 00:04:49.925 06:52:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:49.925 06:52:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.925 06:52:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.925 06:52:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.925 06:52:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.925 ************************************ 00:04:49.925 START TEST rpc_plugins 00:04:49.925 ************************************ 00:04:49.925 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:49.925 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.925 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.925 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.925 
06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.925 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.925 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.925 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.925 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.925 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.925 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.925 { 00:04:49.925 "name": "Malloc1", 00:04:49.925 "aliases": [ 00:04:49.925 "d9e5c0e6-bc21-4714-ab13-05f517619fd1" 00:04:49.925 ], 00:04:49.925 "product_name": "Malloc disk", 00:04:49.925 "block_size": 4096, 00:04:49.926 "num_blocks": 256, 00:04:49.926 "uuid": "d9e5c0e6-bc21-4714-ab13-05f517619fd1", 00:04:49.926 "assigned_rate_limits": { 00:04:49.926 "rw_ios_per_sec": 0, 00:04:49.926 "rw_mbytes_per_sec": 0, 00:04:49.926 "r_mbytes_per_sec": 0, 00:04:49.926 "w_mbytes_per_sec": 0 00:04:49.926 }, 00:04:49.926 "claimed": false, 00:04:49.926 "zoned": false, 00:04:49.926 "supported_io_types": { 00:04:49.926 "read": true, 00:04:49.926 "write": true, 00:04:49.926 "unmap": true, 00:04:49.926 "flush": true, 00:04:49.926 "reset": true, 00:04:49.926 "nvme_admin": false, 00:04:49.926 "nvme_io": false, 00:04:49.926 "nvme_io_md": false, 00:04:49.926 "write_zeroes": true, 00:04:49.926 "zcopy": true, 00:04:49.926 "get_zone_info": false, 00:04:49.926 "zone_management": false, 00:04:49.926 "zone_append": false, 00:04:49.926 "compare": false, 00:04:49.926 "compare_and_write": false, 00:04:49.926 "abort": true, 00:04:49.926 "seek_hole": false, 00:04:49.926 "seek_data": false, 00:04:49.926 "copy": true, 00:04:49.926 "nvme_iov_md": false 00:04:49.926 }, 00:04:49.926 "memory_domains": [ 00:04:49.926 { 00:04:49.926 "dma_device_id": "system", 00:04:49.926 "dma_device_type": 1 00:04:49.926 }, 00:04:49.926 { 00:04:49.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.926 "dma_device_type": 2 00:04:49.926 } 00:04:49.926 ], 00:04:49.926 "driver_specific": {} 00:04:49.926 } 00:04:49.926 ]' 00:04:49.926 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:49.926 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.926 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.926 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.926 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.926 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:49.926 06:52:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.926 00:04:49.926 real 0m0.114s 00:04:49.926 user 0m0.077s 00:04:49.926 sys 0m0.011s 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.926 06:52:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.926 
************************************ 00:04:49.926 END TEST rpc_plugins 00:04:49.926 ************************************ 00:04:49.926 06:52:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:49.926 06:52:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:49.926 06:52:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.926 06:52:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.926 06:52:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.926 ************************************ 00:04:49.926 START TEST rpc_trace_cmd_test 00:04:49.926 ************************************ 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:49.926 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1380512", 00:04:49.926 "tpoint_group_mask": "0x8", 00:04:49.926 "iscsi_conn": { 00:04:49.926 "mask": "0x2", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "scsi": { 00:04:49.926 "mask": "0x4", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "bdev": { 00:04:49.926 "mask": "0x8", 00:04:49.926 "tpoint_mask": "0xffffffffffffffff" 00:04:49.926 }, 00:04:49.926 "nvmf_rdma": { 00:04:49.926 "mask": "0x10", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "nvmf_tcp": { 00:04:49.926 "mask": "0x20", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "ftl": { 00:04:49.926 "mask": "0x40", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "blobfs": { 00:04:49.926 "mask": "0x80", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "dsa": { 00:04:49.926 "mask": "0x200", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "thread": { 00:04:49.926 "mask": "0x400", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "nvme_pcie": { 00:04:49.926 "mask": "0x800", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "iaa": { 00:04:49.926 "mask": "0x1000", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "nvme_tcp": { 00:04:49.926 "mask": "0x2000", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "bdev_nvme": { 00:04:49.926 "mask": "0x4000", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 }, 00:04:49.926 "sock": { 00:04:49.926 "mask": "0x8000", 00:04:49.926 "tpoint_mask": "0x0" 00:04:49.926 } 00:04:49.926 }' 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:49.926 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:50.183 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:50.183 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:50.183 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:50.183 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:50.183 06:52:19 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:50.183 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:50.183 06:52:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:50.183 00:04:50.183 real 0m0.194s 00:04:50.183 user 0m0.173s 00:04:50.183 sys 0m0.012s 00:04:50.184 06:52:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.184 06:52:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 ************************************ 00:04:50.184 END TEST rpc_trace_cmd_test 00:04:50.184 ************************************ 00:04:50.184 06:52:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.184 06:52:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:50.184 06:52:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.184 06:52:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.184 06:52:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.184 06:52:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.184 06:52:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 ************************************ 00:04:50.184 START TEST rpc_daemon_integrity 00:04:50.184 ************************************ 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.184 { 00:04:50.184 "name": "Malloc2", 00:04:50.184 "aliases": [ 00:04:50.184 "3f5cb60d-7351-403a-8b94-1f13982cce78" 00:04:50.184 ], 00:04:50.184 "product_name": "Malloc disk", 00:04:50.184 "block_size": 512, 00:04:50.184 "num_blocks": 16384, 00:04:50.184 "uuid": "3f5cb60d-7351-403a-8b94-1f13982cce78", 00:04:50.184 "assigned_rate_limits": { 00:04:50.184 "rw_ios_per_sec": 0, 00:04:50.184 "rw_mbytes_per_sec": 0, 00:04:50.184 "r_mbytes_per_sec": 0, 00:04:50.184 "w_mbytes_per_sec": 0 00:04:50.184 }, 00:04:50.184 "claimed": false, 
00:04:50.184 "zoned": false, 00:04:50.184 "supported_io_types": { 00:04:50.184 "read": true, 00:04:50.184 "write": true, 00:04:50.184 "unmap": true, 00:04:50.184 "flush": true, 00:04:50.184 "reset": true, 00:04:50.184 "nvme_admin": false, 00:04:50.184 "nvme_io": false, 00:04:50.184 "nvme_io_md": false, 00:04:50.184 "write_zeroes": true, 00:04:50.184 "zcopy": true, 00:04:50.184 "get_zone_info": false, 00:04:50.184 "zone_management": false, 00:04:50.184 "zone_append": false, 00:04:50.184 "compare": false, 00:04:50.184 "compare_and_write": false, 00:04:50.184 "abort": true, 00:04:50.184 "seek_hole": false, 00:04:50.184 "seek_data": false, 00:04:50.184 "copy": true, 00:04:50.184 "nvme_iov_md": false 00:04:50.184 }, 00:04:50.184 "memory_domains": [ 00:04:50.184 { 00:04:50.184 "dma_device_id": "system", 00:04:50.184 "dma_device_type": 1 00:04:50.184 }, 00:04:50.184 { 00:04:50.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.184 "dma_device_type": 2 00:04:50.184 } 00:04:50.184 ], 00:04:50.184 "driver_specific": {} 00:04:50.184 } 00:04:50.184 ]' 00:04:50.184 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.442 [2024-07-13 06:52:19.670060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.442 [2024-07-13 06:52:19.670101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.442 [2024-07-13 06:52:19.670123] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10d2490 00:04:50.442 [2024-07-13 06:52:19.670157] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.442 [2024-07-13 06:52:19.671490] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.442 [2024-07-13 06:52:19.671519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.442 Passthru0 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.442 { 00:04:50.442 "name": "Malloc2", 00:04:50.442 "aliases": [ 00:04:50.442 "3f5cb60d-7351-403a-8b94-1f13982cce78" 00:04:50.442 ], 00:04:50.442 "product_name": "Malloc disk", 00:04:50.442 "block_size": 512, 00:04:50.442 "num_blocks": 16384, 00:04:50.442 "uuid": "3f5cb60d-7351-403a-8b94-1f13982cce78", 00:04:50.442 "assigned_rate_limits": { 00:04:50.442 "rw_ios_per_sec": 0, 00:04:50.442 "rw_mbytes_per_sec": 0, 00:04:50.442 "r_mbytes_per_sec": 0, 00:04:50.442 "w_mbytes_per_sec": 0 00:04:50.442 }, 00:04:50.442 "claimed": true, 00:04:50.442 "claim_type": "exclusive_write", 00:04:50.442 "zoned": false, 00:04:50.442 "supported_io_types": { 00:04:50.442 "read": true, 00:04:50.442 "write": true, 
00:04:50.442 "unmap": true, 00:04:50.442 "flush": true, 00:04:50.442 "reset": true, 00:04:50.442 "nvme_admin": false, 00:04:50.442 "nvme_io": false, 00:04:50.442 "nvme_io_md": false, 00:04:50.442 "write_zeroes": true, 00:04:50.442 "zcopy": true, 00:04:50.442 "get_zone_info": false, 00:04:50.442 "zone_management": false, 00:04:50.442 "zone_append": false, 00:04:50.442 "compare": false, 00:04:50.442 "compare_and_write": false, 00:04:50.442 "abort": true, 00:04:50.442 "seek_hole": false, 00:04:50.442 "seek_data": false, 00:04:50.442 "copy": true, 00:04:50.442 "nvme_iov_md": false 00:04:50.442 }, 00:04:50.442 "memory_domains": [ 00:04:50.442 { 00:04:50.442 "dma_device_id": "system", 00:04:50.442 "dma_device_type": 1 00:04:50.442 }, 00:04:50.442 { 00:04:50.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.442 "dma_device_type": 2 00:04:50.442 } 00:04:50.442 ], 00:04:50.442 "driver_specific": {} 00:04:50.442 }, 00:04:50.442 { 00:04:50.442 "name": "Passthru0", 00:04:50.442 "aliases": [ 00:04:50.442 "405a415a-aedd-5ecb-9443-445ef427c1c0" 00:04:50.442 ], 00:04:50.442 "product_name": "passthru", 00:04:50.442 "block_size": 512, 00:04:50.442 "num_blocks": 16384, 00:04:50.442 "uuid": "405a415a-aedd-5ecb-9443-445ef427c1c0", 00:04:50.442 "assigned_rate_limits": { 00:04:50.442 "rw_ios_per_sec": 0, 00:04:50.442 "rw_mbytes_per_sec": 0, 00:04:50.442 "r_mbytes_per_sec": 0, 00:04:50.442 "w_mbytes_per_sec": 0 00:04:50.442 }, 00:04:50.442 "claimed": false, 00:04:50.442 "zoned": false, 00:04:50.442 "supported_io_types": { 00:04:50.442 "read": true, 00:04:50.442 "write": true, 00:04:50.442 "unmap": true, 00:04:50.442 "flush": true, 00:04:50.442 "reset": true, 00:04:50.442 "nvme_admin": false, 00:04:50.442 "nvme_io": false, 00:04:50.442 "nvme_io_md": false, 00:04:50.442 "write_zeroes": true, 00:04:50.442 "zcopy": true, 00:04:50.442 "get_zone_info": false, 00:04:50.442 "zone_management": false, 00:04:50.442 "zone_append": false, 00:04:50.442 "compare": false, 00:04:50.442 "compare_and_write": false, 00:04:50.442 "abort": true, 00:04:50.442 "seek_hole": false, 00:04:50.442 "seek_data": false, 00:04:50.442 "copy": true, 00:04:50.442 "nvme_iov_md": false 00:04:50.442 }, 00:04:50.442 "memory_domains": [ 00:04:50.442 { 00:04:50.442 "dma_device_id": "system", 00:04:50.442 "dma_device_type": 1 00:04:50.442 }, 00:04:50.442 { 00:04:50.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.442 "dma_device_type": 2 00:04:50.442 } 00:04:50.442 ], 00:04:50.442 "driver_specific": { 00:04:50.442 "passthru": { 00:04:50.442 "name": "Passthru0", 00:04:50.442 "base_bdev_name": "Malloc2" 00:04:50.442 } 00:04:50.442 } 00:04:50.442 } 00:04:50.442 ]' 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.442 06:52:19 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.442 00:04:50.442 real 0m0.222s 00:04:50.442 user 0m0.153s 00:04:50.442 sys 0m0.018s 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.442 06:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.442 ************************************ 00:04:50.442 END TEST rpc_daemon_integrity 00:04:50.442 ************************************ 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.442 06:52:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.442 06:52:19 rpc -- rpc/rpc.sh@84 -- # killprocess 1380512 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@948 -- # '[' -z 1380512 ']' 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@952 -- # kill -0 1380512 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@953 -- # uname 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1380512 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1380512' 00:04:50.442 killing process with pid 1380512 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@967 -- # kill 1380512 00:04:50.442 06:52:19 rpc -- common/autotest_common.sh@972 -- # wait 1380512 00:04:51.006 00:04:51.006 real 0m1.880s 00:04:51.006 user 0m2.374s 00:04:51.006 sys 0m0.579s 00:04:51.006 06:52:20 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.006 06:52:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.006 ************************************ 00:04:51.006 END TEST rpc 00:04:51.006 ************************************ 00:04:51.006 06:52:20 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.007 06:52:20 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:51.007 06:52:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.007 06:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.007 06:52:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.007 ************************************ 00:04:51.007 START TEST skip_rpc 00:04:51.007 ************************************ 00:04:51.007 06:52:20 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:51.007 * Looking for test storage... 
00:04:51.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.007 06:52:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.007 06:52:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.007 06:52:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:51.007 06:52:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.007 06:52:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.007 06:52:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.007 ************************************ 00:04:51.007 START TEST skip_rpc 00:04:51.007 ************************************ 00:04:51.007 06:52:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:51.007 06:52:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1380947 00:04:51.007 06:52:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:51.007 06:52:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.007 06:52:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:51.007 [2024-07-13 06:52:20.426412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:51.007 [2024-07-13 06:52:20.426493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380947 ] 00:04:51.007 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.007 [2024-07-13 06:52:20.456780] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
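The skip_rpc case above boots the target with --no-rpc-server, so no /var/tmp/spdk.sock listener ever appears and the usual waitforlisten helper cannot be used; the script sleeps instead. The launch pattern, condensed from the trace above:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt --no-rpc-server -m 0x1 &            # target runs, but creates no RPC socket
  spdk_pid=$!
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  sleep 5                                       # fixed delay stands in for waitforlisten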
00:04:51.264 [2024-07-13 06:52:20.485575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.264 [2024-07-13 06:52:20.575095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1380947 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1380947 ']' 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1380947 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1380947 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1380947' 00:04:56.516 killing process with pid 1380947 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1380947 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1380947 00:04:56.516 00:04:56.516 real 0m5.462s 00:04:56.516 user 0m5.158s 00:04:56.516 sys 0m0.305s 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.516 06:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.516 ************************************ 00:04:56.516 END TEST skip_rpc 00:04:56.516 ************************************ 00:04:56.516 06:52:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:56.516 06:52:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.516 06:52:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.516 
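Once the no-RPC target is up, the test asserts that an ordinary RPC must fail: rpc_cmd talks to /var/tmp/spdk.sock, and the NOT wrapper inverts the exit status so the step only passes when the call errors out. A simplified stand-in for that check (the real NOT helper in autotest_common.sh, traced above, also normalizes exit codes above 128):

  NOT() { ! "$@"; }                             # simplified sketch of the helper
  NOT ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
      && echo 'RPC refused, as expected with --no-rpc-server'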
06:52:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.516 06:52:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.516 ************************************ 00:04:56.516 START TEST skip_rpc_with_json 00:04:56.516 ************************************ 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1381638 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1381638 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1381638 ']' 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.516 06:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.516 [2024-07-13 06:52:25.940747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:56.516 [2024-07-13 06:52:25.940851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381638 ] 00:04:56.516 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.774 [2024-07-13 06:52:25.972834] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
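Here skip_rpc_with_json starts a normal target and waitforlisten blocks until the RPC socket answers. A rough equivalent of that polling loop, using the rpc_get_methods method and rpc.py's -t timeout flag:

  for _ in $(seq 1 100); do
      ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done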
00:04:56.774 [2024-07-13 06:52:26.004973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.774 [2024-07-13 06:52:26.093225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.032 [2024-07-13 06:52:26.347243] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:57.032 request: 00:04:57.032 { 00:04:57.032 "trtype": "tcp", 00:04:57.032 "method": "nvmf_get_transports", 00:04:57.032 "req_id": 1 00:04:57.032 } 00:04:57.032 Got JSON-RPC error response 00:04:57.032 response: 00:04:57.032 { 00:04:57.032 "code": -19, 00:04:57.032 "message": "No such device" 00:04:57.032 } 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.032 [2024-07-13 06:52:26.355355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.032 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.291 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.291 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.291 { 00:04:57.291 "subsystems": [ 00:04:57.291 { 00:04:57.291 "subsystem": "vfio_user_target", 00:04:57.291 "config": null 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "subsystem": "keyring", 00:04:57.291 "config": [] 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "subsystem": "iobuf", 00:04:57.291 "config": [ 00:04:57.291 { 00:04:57.291 "method": "iobuf_set_options", 00:04:57.291 "params": { 00:04:57.291 "small_pool_count": 8192, 00:04:57.291 "large_pool_count": 1024, 00:04:57.291 "small_bufsize": 8192, 00:04:57.291 "large_bufsize": 135168 00:04:57.291 } 00:04:57.291 } 00:04:57.291 ] 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "subsystem": "sock", 00:04:57.291 "config": [ 00:04:57.291 { 00:04:57.291 "method": "sock_set_default_impl", 00:04:57.291 "params": { 00:04:57.291 "impl_name": "posix" 00:04:57.291 } 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "method": "sock_impl_set_options", 00:04:57.291 "params": { 00:04:57.291 "impl_name": "ssl", 00:04:57.291 "recv_buf_size": 4096, 00:04:57.291 "send_buf_size": 4096, 00:04:57.291 "enable_recv_pipe": true, 00:04:57.291 "enable_quickack": false, 00:04:57.291 "enable_placement_id": 0, 00:04:57.291 "enable_zerocopy_send_server": true, 00:04:57.291 
"enable_zerocopy_send_client": false, 00:04:57.291 "zerocopy_threshold": 0, 00:04:57.291 "tls_version": 0, 00:04:57.291 "enable_ktls": false 00:04:57.291 } 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "method": "sock_impl_set_options", 00:04:57.291 "params": { 00:04:57.291 "impl_name": "posix", 00:04:57.291 "recv_buf_size": 2097152, 00:04:57.291 "send_buf_size": 2097152, 00:04:57.291 "enable_recv_pipe": true, 00:04:57.291 "enable_quickack": false, 00:04:57.291 "enable_placement_id": 0, 00:04:57.291 "enable_zerocopy_send_server": true, 00:04:57.291 "enable_zerocopy_send_client": false, 00:04:57.291 "zerocopy_threshold": 0, 00:04:57.291 "tls_version": 0, 00:04:57.291 "enable_ktls": false 00:04:57.291 } 00:04:57.291 } 00:04:57.291 ] 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "subsystem": "vmd", 00:04:57.291 "config": [] 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "subsystem": "accel", 00:04:57.291 "config": [ 00:04:57.291 { 00:04:57.291 "method": "accel_set_options", 00:04:57.291 "params": { 00:04:57.291 "small_cache_size": 128, 00:04:57.291 "large_cache_size": 16, 00:04:57.291 "task_count": 2048, 00:04:57.291 "sequence_count": 2048, 00:04:57.291 "buf_count": 2048 00:04:57.291 } 00:04:57.291 } 00:04:57.291 ] 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "subsystem": "bdev", 00:04:57.291 "config": [ 00:04:57.291 { 00:04:57.291 "method": "bdev_set_options", 00:04:57.291 "params": { 00:04:57.291 "bdev_io_pool_size": 65535, 00:04:57.291 "bdev_io_cache_size": 256, 00:04:57.291 "bdev_auto_examine": true, 00:04:57.291 "iobuf_small_cache_size": 128, 00:04:57.291 "iobuf_large_cache_size": 16 00:04:57.291 } 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "method": "bdev_raid_set_options", 00:04:57.291 "params": { 00:04:57.291 "process_window_size_kb": 1024 00:04:57.291 } 00:04:57.291 }, 00:04:57.291 { 00:04:57.291 "method": "bdev_iscsi_set_options", 00:04:57.291 "params": { 00:04:57.291 "timeout_sec": 30 00:04:57.291 } 00:04:57.291 }, 00:04:57.292 { 00:04:57.292 "method": "bdev_nvme_set_options", 00:04:57.292 "params": { 00:04:57.292 "action_on_timeout": "none", 00:04:57.292 "timeout_us": 0, 00:04:57.292 "timeout_admin_us": 0, 00:04:57.292 "keep_alive_timeout_ms": 10000, 00:04:57.292 "arbitration_burst": 0, 00:04:57.292 "low_priority_weight": 0, 00:04:57.292 "medium_priority_weight": 0, 00:04:57.292 "high_priority_weight": 0, 00:04:57.292 "nvme_adminq_poll_period_us": 10000, 00:04:57.292 "nvme_ioq_poll_period_us": 0, 00:04:57.292 "io_queue_requests": 0, 00:04:57.292 "delay_cmd_submit": true, 00:04:57.292 "transport_retry_count": 4, 00:04:57.292 "bdev_retry_count": 3, 00:04:57.292 "transport_ack_timeout": 0, 00:04:57.292 "ctrlr_loss_timeout_sec": 0, 00:04:57.292 "reconnect_delay_sec": 0, 00:04:57.292 "fast_io_fail_timeout_sec": 0, 00:04:57.292 "disable_auto_failback": false, 00:04:57.292 "generate_uuids": false, 00:04:57.292 "transport_tos": 0, 00:04:57.292 "nvme_error_stat": false, 00:04:57.292 "rdma_srq_size": 0, 00:04:57.292 "io_path_stat": false, 00:04:57.292 "allow_accel_sequence": false, 00:04:57.292 "rdma_max_cq_size": 0, 00:04:57.292 "rdma_cm_event_timeout_ms": 0, 00:04:57.292 "dhchap_digests": [ 00:04:57.292 "sha256", 00:04:57.292 "sha384", 00:04:57.292 "sha512" 00:04:57.292 ], 00:04:57.292 "dhchap_dhgroups": [ 00:04:57.292 "null", 00:04:57.292 "ffdhe2048", 00:04:57.292 "ffdhe3072", 00:04:57.292 "ffdhe4096", 00:04:57.292 "ffdhe6144", 00:04:57.292 "ffdhe8192" 00:04:57.292 ] 00:04:57.292 } 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "method": "bdev_nvme_set_hotplug", 00:04:57.292 "params": { 
00:04:57.292 "period_us": 100000, 00:04:57.292 "enable": false 00:04:57.292 } 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "method": "bdev_wait_for_examine" 00:04:57.292 } 00:04:57.292 ] 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "scsi", 00:04:57.292 "config": null 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "scheduler", 00:04:57.292 "config": [ 00:04:57.292 { 00:04:57.292 "method": "framework_set_scheduler", 00:04:57.292 "params": { 00:04:57.292 "name": "static" 00:04:57.292 } 00:04:57.292 } 00:04:57.292 ] 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "vhost_scsi", 00:04:57.292 "config": [] 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "vhost_blk", 00:04:57.292 "config": [] 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "ublk", 00:04:57.292 "config": [] 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "nbd", 00:04:57.292 "config": [] 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "nvmf", 00:04:57.292 "config": [ 00:04:57.292 { 00:04:57.292 "method": "nvmf_set_config", 00:04:57.292 "params": { 00:04:57.292 "discovery_filter": "match_any", 00:04:57.292 "admin_cmd_passthru": { 00:04:57.292 "identify_ctrlr": false 00:04:57.292 } 00:04:57.292 } 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "method": "nvmf_set_max_subsystems", 00:04:57.292 "params": { 00:04:57.292 "max_subsystems": 1024 00:04:57.292 } 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "method": "nvmf_set_crdt", 00:04:57.292 "params": { 00:04:57.292 "crdt1": 0, 00:04:57.292 "crdt2": 0, 00:04:57.292 "crdt3": 0 00:04:57.292 } 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "method": "nvmf_create_transport", 00:04:57.292 "params": { 00:04:57.292 "trtype": "TCP", 00:04:57.292 "max_queue_depth": 128, 00:04:57.292 "max_io_qpairs_per_ctrlr": 127, 00:04:57.292 "in_capsule_data_size": 4096, 00:04:57.292 "max_io_size": 131072, 00:04:57.292 "io_unit_size": 131072, 00:04:57.292 "max_aq_depth": 128, 00:04:57.292 "num_shared_buffers": 511, 00:04:57.292 "buf_cache_size": 4294967295, 00:04:57.292 "dif_insert_or_strip": false, 00:04:57.292 "zcopy": false, 00:04:57.292 "c2h_success": true, 00:04:57.292 "sock_priority": 0, 00:04:57.292 "abort_timeout_sec": 1, 00:04:57.292 "ack_timeout": 0, 00:04:57.292 "data_wr_pool_size": 0 00:04:57.292 } 00:04:57.292 } 00:04:57.292 ] 00:04:57.292 }, 00:04:57.292 { 00:04:57.292 "subsystem": "iscsi", 00:04:57.292 "config": [ 00:04:57.292 { 00:04:57.292 "method": "iscsi_set_options", 00:04:57.292 "params": { 00:04:57.292 "node_base": "iqn.2016-06.io.spdk", 00:04:57.292 "max_sessions": 128, 00:04:57.292 "max_connections_per_session": 2, 00:04:57.292 "max_queue_depth": 64, 00:04:57.292 "default_time2wait": 2, 00:04:57.292 "default_time2retain": 20, 00:04:57.292 "first_burst_length": 8192, 00:04:57.292 "immediate_data": true, 00:04:57.292 "allow_duplicated_isid": false, 00:04:57.292 "error_recovery_level": 0, 00:04:57.292 "nop_timeout": 60, 00:04:57.292 "nop_in_interval": 30, 00:04:57.292 "disable_chap": false, 00:04:57.292 "require_chap": false, 00:04:57.292 "mutual_chap": false, 00:04:57.292 "chap_group": 0, 00:04:57.292 "max_large_datain_per_connection": 64, 00:04:57.292 "max_r2t_per_connection": 4, 00:04:57.292 "pdu_pool_size": 36864, 00:04:57.292 "immediate_data_pool_size": 16384, 00:04:57.292 "data_out_pool_size": 2048 00:04:57.292 } 00:04:57.292 } 00:04:57.292 ] 00:04:57.292 } 00:04:57.292 ] 00:04:57.292 } 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:57.292 06:52:26 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1381638 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1381638 ']' 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1381638 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1381638 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1381638' 00:04:57.292 killing process with pid 1381638 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1381638 00:04:57.292 06:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1381638 00:04:57.550 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1381780 00:04:57.550 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.550 06:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.848 06:52:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1381780 00:05:02.848 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1381780 ']' 00:05:02.848 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1381780 00:05:02.848 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:02.848 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.848 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1381780 00:05:02.849 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.849 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.849 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1381780' 00:05:02.849 killing process with pid 1381780 00:05:02.849 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1381780 00:05:02.849 06:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1381780 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:03.107 00:05:03.107 real 0m6.495s 00:05:03.107 user 0m6.093s 00:05:03.107 sys 0m0.677s 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.107 ************************************ 00:05:03.107 END 
TEST skip_rpc_with_json 00:05:03.107 ************************************ 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.107 06:52:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.107 ************************************ 00:05:03.107 START TEST skip_rpc_with_delay 00:05:03.107 ************************************ 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.107 [2024-07-13 06:52:32.483973] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
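That ERROR line is the whole point of test_skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until a framework_start_init RPC arrives, which can never happen when --no-rpc-server suppresses the RPC listener, so spdk_tgt must refuse to start. Condensed:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # contradictory flags
  echo $?                                       # nonzero; the NOT wrapper expects this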
00:05:03.107 [2024-07-13 06:52:32.484076] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.107 00:05:03.107 real 0m0.068s 00:05:03.107 user 0m0.040s 00:05:03.107 sys 0m0.028s 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.107 06:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:03.107 ************************************ 00:05:03.107 END TEST skip_rpc_with_delay 00:05:03.107 ************************************ 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.107 06:52:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:03.107 06:52:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:03.107 06:52:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.107 06:52:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.107 ************************************ 00:05:03.107 START TEST exit_on_failed_rpc_init 00:05:03.107 ************************************ 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1382494 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1382494 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1382494 ']' 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.107 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.365 [2024-07-13 06:52:32.599453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
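exit_on_failed_rpc_init checks that a target whose RPC listener cannot be set up exits instead of hanging: a first target claims the default /var/tmp/spdk.sock, then a second target on another core mask tries the same socket and must fail. Condensed from the trace that follows:

  ./build/bin/spdk_tgt -m 0x1 &                 # first target owns /var/tmp/spdk.sock
  # second target hits 'RPC Unix domain socket path /var/tmp/spdk.sock in use' and exits:
  NOT ./build/bin/spdk_tgt -m 0x2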
00:05:03.365 [2024-07-13 06:52:32.599535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382494 ] 00:05:03.365 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.365 [2024-07-13 06:52:32.631250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:03.365 [2024-07-13 06:52:32.656770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.365 [2024-07-13 06:52:32.744558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:03.622 06:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.622 [2024-07-13 06:52:33.043332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:03.622 [2024-07-13 06:52:33.043408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382511 ] 00:05:03.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.622 [2024-07-13 06:52:33.075084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:03.879 [2024-07-13 06:52:33.105837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.879 [2024-07-13 06:52:33.199002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.879 [2024-07-13 06:52:33.199100] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:03.879 [2024-07-13 06:52:33.199118] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:03.879 [2024-07-13 06:52:33.199130] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1382494 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1382494 ']' 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1382494 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1382494 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1382494' 00:05:03.879 killing process with pid 1382494 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1382494 00:05:03.879 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1382494 00:05:04.445 00:05:04.445 real 0m1.180s 00:05:04.445 user 0m1.283s 00:05:04.445 sys 0m0.452s 00:05:04.445 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.445 06:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.445 ************************************ 00:05:04.445 END TEST exit_on_failed_rpc_init 00:05:04.445 ************************************ 00:05:04.445 06:52:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.445 06:52:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.445 00:05:04.445 real 0m13.452s 00:05:04.445 user 0m12.677s 00:05:04.445 sys 0m1.623s 00:05:04.445 06:52:33 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.445 06:52:33 skip_rpc 
-- common/autotest_common.sh@10 -- # set +x 00:05:04.445 ************************************ 00:05:04.445 END TEST skip_rpc 00:05:04.445 ************************************ 00:05:04.445 06:52:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.445 06:52:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.445 06:52:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.445 06:52:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.445 06:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:04.445 ************************************ 00:05:04.445 START TEST rpc_client 00:05:04.445 ************************************ 00:05:04.446 06:52:33 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.446 * Looking for test storage... 00:05:04.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:04.446 06:52:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:04.446 OK 00:05:04.446 06:52:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:04.446 00:05:04.446 real 0m0.067s 00:05:04.446 user 0m0.026s 00:05:04.446 sys 0m0.046s 00:05:04.446 06:52:33 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.446 06:52:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:04.446 ************************************ 00:05:04.446 END TEST rpc_client 00:05:04.446 ************************************ 00:05:04.446 06:52:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.446 06:52:33 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:04.446 06:52:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.446 06:52:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.446 06:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:04.704 ************************************ 00:05:04.704 START TEST json_config 00:05:04.704 ************************************ 00:05:04.704 06:52:33 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:04.704 06:52:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.704 06:52:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.704 06:52:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.704 06:52:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.704 06:52:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.704 06:52:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.704 06:52:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.704 06:52:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.704 06:52:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@47 -- # : 0 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:04.704 06:52:33 json_config -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:05:04.705 06:52:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:04.705 INFO: JSON configuration test init 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.705 06:52:33 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:04.705 06:52:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.705 06:52:33 json_config -- json_config/common.sh@10 -- # shift 00:05:04.705 06:52:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.705 06:52:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.705 06:52:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.705 06:52:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.705 06:52:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.705 06:52:33 json_config -- json_config/common.sh@22 -- # 
app_pid["$app"]=1382747 00:05:04.705 06:52:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:04.705 06:52:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.705 Waiting for target to run... 00:05:04.705 06:52:33 json_config -- json_config/common.sh@25 -- # waitforlisten 1382747 /var/tmp/spdk_tgt.sock 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@829 -- # '[' -z 1382747 ']' 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.705 06:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.705 [2024-07-13 06:52:34.014061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:04.705 [2024-07-13 06:52:34.014148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382747 ] 00:05:04.705 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.963 [2024-07-13 06:52:34.321060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:04.963 [2024-07-13 06:52:34.354674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.963 [2024-07-13 06:52:34.418445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.528 06:52:34 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.528 06:52:34 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:05.528 06:52:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:05.528 00:05:05.528 06:52:34 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:05.528 06:52:34 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:05.528 06:52:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.528 06:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.528 06:52:34 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:05.528 06:52:34 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:05.528 06:52:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.528 06:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.528 06:52:34 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:05.528 06:52:34 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:05.528 06:52:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.808 06:52:38 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:08.808 06:52:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.808 06:52:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.808 06:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.808 06:52:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.808 06:52:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.808 06:52:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.808 06:52:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:08.808 06:52:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.808 06:52:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:09.065 06:52:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:09.066 06:52:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.066 06:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 
]] 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:09.066 06:52:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.066 06:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:09.066 06:52:38 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.066 06:52:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.323 MallocForNvmf0 00:05:09.323 06:52:38 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.323 06:52:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.580 MallocForNvmf1 00:05:09.580 06:52:38 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.580 06:52:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.838 [2024-07-13 06:52:39.106327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.838 06:52:39 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.838 06:52:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.095 06:52:39 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.095 06:52:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.352 06:52:39 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.352 06:52:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.611 06:52:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.611 06:52:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.868 [2024-07-13 
06:52:40.089533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.868 06:52:40 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:10.868 06:52:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.868 06:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.868 06:52:40 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:10.868 06:52:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.868 06:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.868 06:52:40 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:10.868 06:52:40 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.868 06:52:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.126 MallocBdevForConfigChangeCheck 00:05:11.126 06:52:40 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:11.126 06:52:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.126 06:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.126 06:52:40 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:11.126 06:52:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.383 06:52:40 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:11.384 INFO: shutting down applications... 
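# --- annotation: the NVMe-oF target configuration built above can be replayed
# --- by hand with the same RPCs the test issued (paths shortened to the repo
# --- root; the positional args to bdev_malloc_create are size in MB and block
# --- size in bytes):
rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420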
00:05:11.384 06:52:40 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:11.384 06:52:40 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:11.384 06:52:40 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:11.384 06:52:40 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:13.279 Calling clear_iscsi_subsystem 00:05:13.279 Calling clear_nvmf_subsystem 00:05:13.279 Calling clear_nbd_subsystem 00:05:13.279 Calling clear_ublk_subsystem 00:05:13.279 Calling clear_vhost_blk_subsystem 00:05:13.279 Calling clear_vhost_scsi_subsystem 00:05:13.279 Calling clear_bdev_subsystem 00:05:13.279 06:52:42 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:13.279 06:52:42 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:13.279 06:52:42 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:13.279 06:52:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.279 06:52:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.279 06:52:42 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:13.538 06:52:42 json_config -- json_config/json_config.sh@345 -- # break 00:05:13.538 06:52:42 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:13.538 06:52:42 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:13.538 06:52:42 json_config -- json_config/common.sh@31 -- # local app=target 00:05:13.538 06:52:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.538 06:52:42 json_config -- json_config/common.sh@35 -- # [[ -n 1382747 ]] 00:05:13.538 06:52:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1382747 00:05:13.538 06:52:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.538 06:52:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.538 06:52:42 json_config -- json_config/common.sh@41 -- # kill -0 1382747 00:05:13.538 06:52:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.102 06:52:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.102 06:52:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.102 06:52:43 json_config -- json_config/common.sh@41 -- # kill -0 1382747 00:05:14.102 06:52:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.102 06:52:43 json_config -- json_config/common.sh@43 -- # break 00:05:14.102 06:52:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.102 06:52:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.102 SPDK target shutdown done 00:05:14.102 06:52:43 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:14.102 INFO: relaunching applications... 
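# --- annotation: the shutdown above is graceful-with-timeout: clear_config.py
# --- empties the running config, then SIGINT is sent and `kill -0` is polled
# --- for up to 30 iterations (the i < 30 / sleep 0.5 loop). Condensed sketch:
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0   # exited: shutdown done
        sleep 0.5
    done
    return 1                                     # still alive after ~15 s
}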
00:05:14.102 06:52:43 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.102 06:52:43 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.102 06:52:43 json_config -- json_config/common.sh@10 -- # shift 00:05:14.102 06:52:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.102 06:52:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.102 06:52:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.102 06:52:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.102 06:52:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.102 06:52:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1384011 00:05:14.102 06:52:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.102 06:52:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.102 Waiting for target to run... 00:05:14.102 06:52:43 json_config -- json_config/common.sh@25 -- # waitforlisten 1384011 /var/tmp/spdk_tgt.sock 00:05:14.102 06:52:43 json_config -- common/autotest_common.sh@829 -- # '[' -z 1384011 ']' 00:05:14.102 06:52:43 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.102 06:52:43 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.102 06:52:43 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.102 06:52:43 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.102 06:52:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.102 [2024-07-13 06:52:43.359964] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:14.102 [2024-07-13 06:52:43.360065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384011 ] 00:05:14.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.676 [2024-07-13 06:52:43.858436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
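# --- annotation: note the changed launch mode. The first run used
# --- --wait-for-rpc (start idle, then rpc.py load_config); this relaunch feeds
# --- the previously saved config directly at startup:
# build/bin/spdk_tgt ... --wait-for-rpc                 # config loaded later via RPC
# build/bin/spdk_tgt ... --json spdk_tgt_config.json    # config applied at boot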
00:05:14.676 [2024-07-13 06:52:43.890092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.676 [2024-07-13 06:52:43.969304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.968 [2024-07-13 06:52:47.003874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.968 [2024-07-13 06:52:47.036334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.534 06:52:47 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.534 06:52:47 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:18.534 06:52:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:18.534 00:05:18.534 06:52:47 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:18.534 06:52:47 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:18.534 INFO: Checking if target configuration is the same... 00:05:18.534 06:52:47 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.534 06:52:47 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:18.534 06:52:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.534 + '[' 2 -ne 2 ']' 00:05:18.534 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.534 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:18.534 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.534 +++ basename /dev/fd/62 00:05:18.534 ++ mktemp /tmp/62.XXX 00:05:18.534 + tmp_file_1=/tmp/62.kOl 00:05:18.534 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.534 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.534 + tmp_file_2=/tmp/spdk_tgt_config.json.aAZ 00:05:18.534 + ret=0 00:05:18.534 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.792 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.792 + diff -u /tmp/62.kOl /tmp/spdk_tgt_config.json.aAZ 00:05:18.792 + echo 'INFO: JSON config files are the same' 00:05:18.792 INFO: JSON config files are the same 00:05:18.792 + rm /tmp/62.kOl /tmp/spdk_tgt_config.json.aAZ 00:05:18.792 + exit 0 00:05:18.792 06:52:48 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:18.792 06:52:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.792 INFO: changing configuration and checking if this can be detected... 
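# --- annotation: the identity check normalizes both JSON documents with
# --- config_filter.py -method sort before diffing, so key ordering cannot cause
# --- a false mismatch. The core of json_diff.sh, condensed (the real script
# --- plumbs the live config in via /dev/fd substitution rather than a pipe):
tmp_live=$(mktemp /tmp/62.XXX)
tmp_file=$(mktemp /tmp/spdk_tgt_config.json.XXX)
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$tmp_live"
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$tmp_file"
diff -u "$tmp_live" "$tmp_file" && echo 'INFO: JSON config files are the same'
rm "$tmp_live" "$tmp_file"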
00:05:18.792 06:52:48 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.792 06:52:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.050 06:52:48 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.050 06:52:48 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:19.050 06:52:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.050 + '[' 2 -ne 2 ']' 00:05:19.050 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.050 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:19.050 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.050 +++ basename /dev/fd/62 00:05:19.050 ++ mktemp /tmp/62.XXX 00:05:19.050 + tmp_file_1=/tmp/62.wt6 00:05:19.050 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.050 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.050 + tmp_file_2=/tmp/spdk_tgt_config.json.Ou0 00:05:19.050 + ret=0 00:05:19.050 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.614 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.614 + diff -u /tmp/62.wt6 /tmp/spdk_tgt_config.json.Ou0 00:05:19.614 + ret=1 00:05:19.614 + echo '=== Start of file: /tmp/62.wt6 ===' 00:05:19.614 + cat /tmp/62.wt6 00:05:19.614 + echo '=== End of file: /tmp/62.wt6 ===' 00:05:19.614 + echo '' 00:05:19.614 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ou0 ===' 00:05:19.614 + cat /tmp/spdk_tgt_config.json.Ou0 00:05:19.614 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ou0 ===' 00:05:19.614 + echo '' 00:05:19.614 + rm /tmp/62.wt6 /tmp/spdk_tgt_config.json.Ou0 00:05:19.614 + exit 1 00:05:19.614 06:52:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:19.614 INFO: configuration change detected. 
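# --- annotation: the negative test inverts that check: after bdev_malloc_delete
# --- the sorted outputs must differ, i.e. diff must exit non-zero (the ret=1
# --- above). Sketch of the assertion, reusing the temp-file names from the
# --- sketch above:
if diff -u "$tmp_live" "$tmp_file" > /dev/null; then
    echo 'ERROR: configuration change was not detected' >&2
    exit 1
fi
echo 'INFO: configuration change detected.'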
00:05:19.614 06:52:48 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:19.614 06:52:48 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:19.614 06:52:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.614 06:52:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.614 06:52:48 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:19.614 06:52:48 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:19.614 06:52:48 json_config -- json_config/json_config.sh@317 -- # [[ -n 1384011 ]] 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.615 06:52:48 json_config -- json_config/json_config.sh@323 -- # killprocess 1384011 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@948 -- # '[' -z 1384011 ']' 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@952 -- # kill -0 1384011 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@953 -- # uname 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1384011 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1384011' 00:05:19.615 killing process with pid 1384011 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@967 -- # kill 1384011 00:05:19.615 06:52:48 json_config -- common/autotest_common.sh@972 -- # wait 1384011 00:05:21.511 06:52:50 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.511 06:52:50 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:21.511 06:52:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:21.511 06:52:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.511 06:52:50 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:21.511 06:52:50 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:21.511 INFO: Success 00:05:21.511 00:05:21.511 real 0m16.675s 
00:05:21.511 user 0m18.519s 00:05:21.511 sys 0m2.073s 00:05:21.511 06:52:50 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.511 06:52:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.511 ************************************ 00:05:21.511 END TEST json_config 00:05:21.511 ************************************ 00:05:21.511 06:52:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.511 06:52:50 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.511 06:52:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.511 06:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.511 06:52:50 -- common/autotest_common.sh@10 -- # set +x 00:05:21.511 ************************************ 00:05:21.511 START TEST json_config_extra_key 00:05:21.511 ************************************ 00:05:21.511 06:52:50 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.511 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.511 06:52:50 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.511 06:52:50 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.511 06:52:50 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.511 06:52:50 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.511 06:52:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.511 06:52:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.511 06:52:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:21.511 06:52:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.511 06:52:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:21.512 06:52:50 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:21.512 06:52:50 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:21.512 06:52:50 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:21.512 INFO: launching applications... 00:05:21.512 06:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1384983 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.512 Waiting for target to run... 00:05:21.512 06:52:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1384983 /var/tmp/spdk_tgt.sock 00:05:21.512 06:52:50 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1384983 ']' 00:05:21.512 06:52:50 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.512 06:52:50 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.512 06:52:50 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.512 06:52:50 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.512 06:52:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.512 [2024-07-13 06:52:50.725609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
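# --- annotation: extra_key.json is a static startup config consumed via --json;
# --- its actual contents are not shown in this log. A minimal SPDK-style config
# --- of the same shape (purely illustrative values) could be written like so:
cat > /tmp/extra_key_example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF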
00:05:21.512 [2024-07-13 06:52:50.725691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384983 ] 00:05:21.512 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.770 [2024-07-13 06:52:51.023513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:21.770 [2024-07-13 06:52:51.056712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.770 [2024-07-13 06:52:51.120086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.336 06:52:51 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.336 06:52:51 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:22.336 00:05:22.336 06:52:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:22.336 INFO: shutting down applications... 00:05:22.336 06:52:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1384983 ]] 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1384983 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1384983 00:05:22.336 06:52:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.903 06:52:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.903 06:52:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.903 06:52:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1384983 00:05:22.903 06:52:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.903 06:52:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:22.903 06:52:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.903 06:52:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.903 SPDK target shutdown done 00:05:22.903 06:52:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:22.903 Success 00:05:22.903 00:05:22.903 real 0m1.532s 00:05:22.903 user 0m1.493s 00:05:22.903 sys 0m0.426s 00:05:22.903 06:52:52 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.903 06:52:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.903 ************************************ 00:05:22.903 END TEST json_config_extra_key 00:05:22.903 ************************************ 00:05:22.903 06:52:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.903 06:52:52 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.903 06:52:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.903 06:52:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.903 06:52:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.903 ************************************ 00:05:22.903 START TEST alias_rpc 00:05:22.903 ************************************ 00:05:22.903 06:52:52 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.903 * Looking for test storage... 00:05:22.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:22.903 06:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.903 06:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1385174 00:05:22.903 06:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.903 06:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1385174 00:05:22.903 06:52:52 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1385174 ']' 00:05:22.903 06:52:52 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.903 06:52:52 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.903 06:52:52 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.903 06:52:52 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.903 06:52:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.903 [2024-07-13 06:52:52.309580] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:22.903 [2024-07-13 06:52:52.309660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385174 ] 00:05:22.903 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.903 [2024-07-13 06:52:52.342669] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
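# --- annotation: killprocess, used when each test tears down its target (pid
# --- 1384011 above, 1385174 below), checks that the pid still names an SPDK
# --- reactor before killing and reaping it. Condensed sketch (simplified from
# --- the autotest_common.sh trace):
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 1                      # must still be running
    local name
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0
    if [[ "$name" = "sudo" ]]; then
        sudo kill "$pid"                            # process was launched via sudo
    else
        kill "$pid"
    fi
    echo "killing process with pid $pid"
    wait "$pid" 2>/dev/null || true                 # reap and ignore exit status
}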
00:05:23.162 [2024-07-13 06:52:52.371066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.162 [2024-07-13 06:52:52.454873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.420 06:52:52 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.420 06:52:52 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:23.420 06:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:23.678 06:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1385174 00:05:23.678 06:52:52 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1385174 ']' 00:05:23.678 06:52:52 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1385174 00:05:23.678 06:52:52 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.678 06:52:53 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.678 06:52:53 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1385174 00:05:23.678 06:52:53 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.678 06:52:53 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.678 06:52:53 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1385174' 00:05:23.678 killing process with pid 1385174 00:05:23.678 06:52:53 alias_rpc -- common/autotest_common.sh@967 -- # kill 1385174 00:05:23.678 06:52:53 alias_rpc -- common/autotest_common.sh@972 -- # wait 1385174 00:05:24.242 00:05:24.242 real 0m1.228s 00:05:24.242 user 0m1.328s 00:05:24.242 sys 0m0.431s 00:05:24.242 06:52:53 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.242 06:52:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.242 ************************************ 00:05:24.242 END TEST alias_rpc 00:05:24.242 ************************************ 00:05:24.242 06:52:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.242 06:52:53 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:24.242 06:52:53 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.242 06:52:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.242 06:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.242 06:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:24.242 ************************************ 00:05:24.242 START TEST spdkcli_tcp 00:05:24.242 ************************************ 00:05:24.242 06:52:53 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.242 * Looking for test storage... 
00:05:24.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:24.242 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:24.242 06:52:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:24.242 06:52:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:24.242 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.242 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.242 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.242 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.243 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1385473 00:05:24.243 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.243 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1385473 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1385473 ']' 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.243 06:52:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.243 [2024-07-13 06:52:53.590914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:24.243 [2024-07-13 06:52:53.590993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385473 ] 00:05:24.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.243 [2024-07-13 06:52:53.625732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
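# --- annotation: unlike the earlier single-core launches (-m 0x1), this target
# --- runs with -m 0x3 -p 0 (two reactor cores, main core 0). The TCP leg of the
# --- test that follows bridges port 9998 to the UNIX RPC socket:
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &           # TCP <-> UNIX bridge
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # RPC over TCP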
00:05:24.243 [2024-07-13 06:52:53.656041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.500 [2024-07-13 06:52:53.748198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.500 [2024-07-13 06:52:53.748203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.758 06:52:53 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.758 06:52:53 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:24.758 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1385487 00:05:24.758 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.758 06:52:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.017 [ 00:05:25.017 "bdev_malloc_delete", 00:05:25.017 "bdev_malloc_create", 00:05:25.017 "bdev_null_resize", 00:05:25.017 "bdev_null_delete", 00:05:25.017 "bdev_null_create", 00:05:25.017 "bdev_nvme_cuse_unregister", 00:05:25.017 "bdev_nvme_cuse_register", 00:05:25.017 "bdev_opal_new_user", 00:05:25.017 "bdev_opal_set_lock_state", 00:05:25.017 "bdev_opal_delete", 00:05:25.017 "bdev_opal_get_info", 00:05:25.017 "bdev_opal_create", 00:05:25.017 "bdev_nvme_opal_revert", 00:05:25.017 "bdev_nvme_opal_init", 00:05:25.017 "bdev_nvme_send_cmd", 00:05:25.017 "bdev_nvme_get_path_iostat", 00:05:25.017 "bdev_nvme_get_mdns_discovery_info", 00:05:25.017 "bdev_nvme_stop_mdns_discovery", 00:05:25.017 "bdev_nvme_start_mdns_discovery", 00:05:25.017 "bdev_nvme_set_multipath_policy", 00:05:25.017 "bdev_nvme_set_preferred_path", 00:05:25.017 "bdev_nvme_get_io_paths", 00:05:25.017 "bdev_nvme_remove_error_injection", 00:05:25.017 "bdev_nvme_add_error_injection", 00:05:25.017 "bdev_nvme_get_discovery_info", 00:05:25.017 "bdev_nvme_stop_discovery", 00:05:25.017 "bdev_nvme_start_discovery", 00:05:25.017 "bdev_nvme_get_controller_health_info", 00:05:25.017 "bdev_nvme_disable_controller", 00:05:25.017 "bdev_nvme_enable_controller", 00:05:25.017 "bdev_nvme_reset_controller", 00:05:25.017 "bdev_nvme_get_transport_statistics", 00:05:25.017 "bdev_nvme_apply_firmware", 00:05:25.017 "bdev_nvme_detach_controller", 00:05:25.017 "bdev_nvme_get_controllers", 00:05:25.017 "bdev_nvme_attach_controller", 00:05:25.017 "bdev_nvme_set_hotplug", 00:05:25.017 "bdev_nvme_set_options", 00:05:25.017 "bdev_passthru_delete", 00:05:25.017 "bdev_passthru_create", 00:05:25.017 "bdev_lvol_set_parent_bdev", 00:05:25.017 "bdev_lvol_set_parent", 00:05:25.017 "bdev_lvol_check_shallow_copy", 00:05:25.017 "bdev_lvol_start_shallow_copy", 00:05:25.017 "bdev_lvol_grow_lvstore", 00:05:25.017 "bdev_lvol_get_lvols", 00:05:25.017 "bdev_lvol_get_lvstores", 00:05:25.017 "bdev_lvol_delete", 00:05:25.017 "bdev_lvol_set_read_only", 00:05:25.017 "bdev_lvol_resize", 00:05:25.017 "bdev_lvol_decouple_parent", 00:05:25.017 "bdev_lvol_inflate", 00:05:25.017 "bdev_lvol_rename", 00:05:25.017 "bdev_lvol_clone_bdev", 00:05:25.017 "bdev_lvol_clone", 00:05:25.017 "bdev_lvol_snapshot", 00:05:25.017 "bdev_lvol_create", 00:05:25.017 "bdev_lvol_delete_lvstore", 00:05:25.017 "bdev_lvol_rename_lvstore", 00:05:25.017 "bdev_lvol_create_lvstore", 00:05:25.017 "bdev_raid_set_options", 00:05:25.017 "bdev_raid_remove_base_bdev", 00:05:25.017 "bdev_raid_add_base_bdev", 00:05:25.017 "bdev_raid_delete", 00:05:25.017 "bdev_raid_create", 00:05:25.017 "bdev_raid_get_bdevs", 00:05:25.017 "bdev_error_inject_error", 00:05:25.017 "bdev_error_delete", 
00:05:25.017 "bdev_error_create", 00:05:25.017 "bdev_split_delete", 00:05:25.017 "bdev_split_create", 00:05:25.017 "bdev_delay_delete", 00:05:25.017 "bdev_delay_create", 00:05:25.017 "bdev_delay_update_latency", 00:05:25.017 "bdev_zone_block_delete", 00:05:25.017 "bdev_zone_block_create", 00:05:25.017 "blobfs_create", 00:05:25.017 "blobfs_detect", 00:05:25.017 "blobfs_set_cache_size", 00:05:25.017 "bdev_aio_delete", 00:05:25.017 "bdev_aio_rescan", 00:05:25.017 "bdev_aio_create", 00:05:25.017 "bdev_ftl_set_property", 00:05:25.017 "bdev_ftl_get_properties", 00:05:25.017 "bdev_ftl_get_stats", 00:05:25.017 "bdev_ftl_unmap", 00:05:25.017 "bdev_ftl_unload", 00:05:25.017 "bdev_ftl_delete", 00:05:25.017 "bdev_ftl_load", 00:05:25.017 "bdev_ftl_create", 00:05:25.017 "bdev_virtio_attach_controller", 00:05:25.017 "bdev_virtio_scsi_get_devices", 00:05:25.017 "bdev_virtio_detach_controller", 00:05:25.017 "bdev_virtio_blk_set_hotplug", 00:05:25.017 "bdev_iscsi_delete", 00:05:25.017 "bdev_iscsi_create", 00:05:25.017 "bdev_iscsi_set_options", 00:05:25.017 "accel_error_inject_error", 00:05:25.017 "ioat_scan_accel_module", 00:05:25.017 "dsa_scan_accel_module", 00:05:25.017 "iaa_scan_accel_module", 00:05:25.017 "vfu_virtio_create_scsi_endpoint", 00:05:25.017 "vfu_virtio_scsi_remove_target", 00:05:25.017 "vfu_virtio_scsi_add_target", 00:05:25.017 "vfu_virtio_create_blk_endpoint", 00:05:25.017 "vfu_virtio_delete_endpoint", 00:05:25.017 "keyring_file_remove_key", 00:05:25.017 "keyring_file_add_key", 00:05:25.017 "keyring_linux_set_options", 00:05:25.017 "iscsi_get_histogram", 00:05:25.017 "iscsi_enable_histogram", 00:05:25.017 "iscsi_set_options", 00:05:25.017 "iscsi_get_auth_groups", 00:05:25.017 "iscsi_auth_group_remove_secret", 00:05:25.017 "iscsi_auth_group_add_secret", 00:05:25.017 "iscsi_delete_auth_group", 00:05:25.017 "iscsi_create_auth_group", 00:05:25.017 "iscsi_set_discovery_auth", 00:05:25.017 "iscsi_get_options", 00:05:25.017 "iscsi_target_node_request_logout", 00:05:25.017 "iscsi_target_node_set_redirect", 00:05:25.017 "iscsi_target_node_set_auth", 00:05:25.017 "iscsi_target_node_add_lun", 00:05:25.017 "iscsi_get_stats", 00:05:25.017 "iscsi_get_connections", 00:05:25.017 "iscsi_portal_group_set_auth", 00:05:25.017 "iscsi_start_portal_group", 00:05:25.017 "iscsi_delete_portal_group", 00:05:25.017 "iscsi_create_portal_group", 00:05:25.017 "iscsi_get_portal_groups", 00:05:25.017 "iscsi_delete_target_node", 00:05:25.017 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.017 "iscsi_target_node_add_pg_ig_maps", 00:05:25.017 "iscsi_create_target_node", 00:05:25.017 "iscsi_get_target_nodes", 00:05:25.017 "iscsi_delete_initiator_group", 00:05:25.017 "iscsi_initiator_group_remove_initiators", 00:05:25.017 "iscsi_initiator_group_add_initiators", 00:05:25.017 "iscsi_create_initiator_group", 00:05:25.017 "iscsi_get_initiator_groups", 00:05:25.017 "nvmf_set_crdt", 00:05:25.017 "nvmf_set_config", 00:05:25.017 "nvmf_set_max_subsystems", 00:05:25.017 "nvmf_stop_mdns_prr", 00:05:25.017 "nvmf_publish_mdns_prr", 00:05:25.017 "nvmf_subsystem_get_listeners", 00:05:25.017 "nvmf_subsystem_get_qpairs", 00:05:25.017 "nvmf_subsystem_get_controllers", 00:05:25.017 "nvmf_get_stats", 00:05:25.017 "nvmf_get_transports", 00:05:25.017 "nvmf_create_transport", 00:05:25.017 "nvmf_get_targets", 00:05:25.017 "nvmf_delete_target", 00:05:25.017 "nvmf_create_target", 00:05:25.017 "nvmf_subsystem_allow_any_host", 00:05:25.017 "nvmf_subsystem_remove_host", 00:05:25.017 "nvmf_subsystem_add_host", 00:05:25.017 "nvmf_ns_remove_host", 
00:05:25.017 "nvmf_ns_add_host", 00:05:25.017 "nvmf_subsystem_remove_ns", 00:05:25.017 "nvmf_subsystem_add_ns", 00:05:25.017 "nvmf_subsystem_listener_set_ana_state", 00:05:25.017 "nvmf_discovery_get_referrals", 00:05:25.017 "nvmf_discovery_remove_referral", 00:05:25.017 "nvmf_discovery_add_referral", 00:05:25.017 "nvmf_subsystem_remove_listener", 00:05:25.017 "nvmf_subsystem_add_listener", 00:05:25.017 "nvmf_delete_subsystem", 00:05:25.017 "nvmf_create_subsystem", 00:05:25.017 "nvmf_get_subsystems", 00:05:25.017 "env_dpdk_get_mem_stats", 00:05:25.017 "nbd_get_disks", 00:05:25.017 "nbd_stop_disk", 00:05:25.017 "nbd_start_disk", 00:05:25.017 "ublk_recover_disk", 00:05:25.017 "ublk_get_disks", 00:05:25.017 "ublk_stop_disk", 00:05:25.017 "ublk_start_disk", 00:05:25.017 "ublk_destroy_target", 00:05:25.017 "ublk_create_target", 00:05:25.017 "virtio_blk_create_transport", 00:05:25.017 "virtio_blk_get_transports", 00:05:25.017 "vhost_controller_set_coalescing", 00:05:25.017 "vhost_get_controllers", 00:05:25.017 "vhost_delete_controller", 00:05:25.017 "vhost_create_blk_controller", 00:05:25.017 "vhost_scsi_controller_remove_target", 00:05:25.017 "vhost_scsi_controller_add_target", 00:05:25.017 "vhost_start_scsi_controller", 00:05:25.017 "vhost_create_scsi_controller", 00:05:25.018 "thread_set_cpumask", 00:05:25.018 "framework_get_governor", 00:05:25.018 "framework_get_scheduler", 00:05:25.018 "framework_set_scheduler", 00:05:25.018 "framework_get_reactors", 00:05:25.018 "thread_get_io_channels", 00:05:25.018 "thread_get_pollers", 00:05:25.018 "thread_get_stats", 00:05:25.018 "framework_monitor_context_switch", 00:05:25.018 "spdk_kill_instance", 00:05:25.018 "log_enable_timestamps", 00:05:25.018 "log_get_flags", 00:05:25.018 "log_clear_flag", 00:05:25.018 "log_set_flag", 00:05:25.018 "log_get_level", 00:05:25.018 "log_set_level", 00:05:25.018 "log_get_print_level", 00:05:25.018 "log_set_print_level", 00:05:25.018 "framework_enable_cpumask_locks", 00:05:25.018 "framework_disable_cpumask_locks", 00:05:25.018 "framework_wait_init", 00:05:25.018 "framework_start_init", 00:05:25.018 "scsi_get_devices", 00:05:25.018 "bdev_get_histogram", 00:05:25.018 "bdev_enable_histogram", 00:05:25.018 "bdev_set_qos_limit", 00:05:25.018 "bdev_set_qd_sampling_period", 00:05:25.018 "bdev_get_bdevs", 00:05:25.018 "bdev_reset_iostat", 00:05:25.018 "bdev_get_iostat", 00:05:25.018 "bdev_examine", 00:05:25.018 "bdev_wait_for_examine", 00:05:25.018 "bdev_set_options", 00:05:25.018 "notify_get_notifications", 00:05:25.018 "notify_get_types", 00:05:25.018 "accel_get_stats", 00:05:25.018 "accel_set_options", 00:05:25.018 "accel_set_driver", 00:05:25.018 "accel_crypto_key_destroy", 00:05:25.018 "accel_crypto_keys_get", 00:05:25.018 "accel_crypto_key_create", 00:05:25.018 "accel_assign_opc", 00:05:25.018 "accel_get_module_info", 00:05:25.018 "accel_get_opc_assignments", 00:05:25.018 "vmd_rescan", 00:05:25.018 "vmd_remove_device", 00:05:25.018 "vmd_enable", 00:05:25.018 "sock_get_default_impl", 00:05:25.018 "sock_set_default_impl", 00:05:25.018 "sock_impl_set_options", 00:05:25.018 "sock_impl_get_options", 00:05:25.018 "iobuf_get_stats", 00:05:25.018 "iobuf_set_options", 00:05:25.018 "keyring_get_keys", 00:05:25.018 "framework_get_pci_devices", 00:05:25.018 "framework_get_config", 00:05:25.018 "framework_get_subsystems", 00:05:25.018 "vfu_tgt_set_base_path", 00:05:25.018 "trace_get_info", 00:05:25.018 "trace_get_tpoint_group_mask", 00:05:25.018 "trace_disable_tpoint_group", 00:05:25.018 "trace_enable_tpoint_group", 00:05:25.018 
"trace_clear_tpoint_mask", 00:05:25.018 "trace_set_tpoint_mask", 00:05:25.018 "spdk_get_version", 00:05:25.018 "rpc_get_methods" 00:05:25.018 ] 00:05:25.018 06:52:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.018 06:52:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.018 06:52:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1385473 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1385473 ']' 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1385473 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1385473 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1385473' 00:05:25.018 killing process with pid 1385473 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1385473 00:05:25.018 06:52:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1385473 00:05:25.277 00:05:25.277 real 0m1.199s 00:05:25.277 user 0m2.101s 00:05:25.277 sys 0m0.447s 00:05:25.277 06:52:54 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.277 06:52:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.277 ************************************ 00:05:25.277 END TEST spdkcli_tcp 00:05:25.277 ************************************ 00:05:25.277 06:52:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.277 06:52:54 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.277 06:52:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.277 06:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.277 06:52:54 -- common/autotest_common.sh@10 -- # set +x 00:05:25.277 ************************************ 00:05:25.277 START TEST dpdk_mem_utility 00:05:25.277 ************************************ 00:05:25.277 06:52:54 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.535 * Looking for test storage... 
00:05:25.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:25.535 06:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:25.535 06:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1385673 00:05:25.535 06:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.535 06:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1385673 00:05:25.535 06:52:54 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1385673 ']' 00:05:25.535 06:52:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.535 06:52:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.535 06:52:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.535 06:52:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.535 06:52:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.535 [2024-07-13 06:52:54.825258] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:25.535 [2024-07-13 06:52:54.825338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385673 ] 00:05:25.535 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.535 [2024-07-13 06:52:54.856659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
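The dpdk_mem_utility test starting here pairs one RPC with one script: env_dpdk_get_mem_stats makes the target write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump as the heap/mempool/memzone summary shown below. A condensed sketch of the same sequence against a running spdk_tgt:

  # Ask the target to dump its DPDK memory stats (returns the dump filename)
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py
  # Print the per-element breakdown for heap 0, as the test does
  ./scripts/dpdk_mem_info.py -m 0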
00:05:25.535 [2024-07-13 06:52:54.882892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.535 [2024-07-13 06:52:54.966240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.794 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.794 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:25.794 06:52:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.794 06:52:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.794 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.794 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.794 { 00:05:25.794 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.794 } 00:05:25.794 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.794 06:52:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.054 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:26.054 1 heaps totaling size 814.000000 MiB 00:05:26.054 size: 814.000000 MiB heap id: 0 00:05:26.054 end heaps---------- 00:05:26.054 8 mempools totaling size 598.116089 MiB 00:05:26.054 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:26.054 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:26.054 size: 84.521057 MiB name: bdev_io_1385673 00:05:26.054 size: 51.011292 MiB name: evtpool_1385673 00:05:26.054 size: 50.003479 MiB name: msgpool_1385673 00:05:26.054 size: 21.763794 MiB name: PDU_Pool 00:05:26.054 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:26.054 size: 0.026123 MiB name: Session_Pool 00:05:26.054 end mempools------- 00:05:26.054 6 memzones totaling size 4.142822 MiB 00:05:26.054 size: 1.000366 MiB name: RG_ring_0_1385673 00:05:26.054 size: 1.000366 MiB name: RG_ring_1_1385673 00:05:26.054 size: 1.000366 MiB name: RG_ring_4_1385673 00:05:26.054 size: 1.000366 MiB name: RG_ring_5_1385673 00:05:26.054 size: 0.125366 MiB name: RG_ring_2_1385673 00:05:26.054 size: 0.015991 MiB name: RG_ring_3_1385673 00:05:26.054 end memzones------- 00:05:26.054 06:52:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:26.054 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:26.054 list of free elements. 
size: 12.519348 MiB 00:05:26.054 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:26.054 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:26.054 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:26.054 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:26.054 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:26.054 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:26.054 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:26.054 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:26.054 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:26.054 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:26.054 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:26.054 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:26.054 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:26.054 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:26.054 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:26.054 list of standard malloc elements. size: 199.218079 MiB 00:05:26.054 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:26.054 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:26.054 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:26.054 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:26.054 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:26.054 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:26.054 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:26.054 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:26.054 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:26.054 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:26.054 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:26.054 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:26.054 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:26.054 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:26.054 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:26.054 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:26.054 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:26.054 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:26.054 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:26.054 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:26.054 list of memzone associated elements. size: 602.262573 MiB 00:05:26.054 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:26.054 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:26.054 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:26.054 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:26.054 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:26.054 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1385673_0 00:05:26.054 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:26.054 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1385673_0 00:05:26.054 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:26.054 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1385673_0 00:05:26.054 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:26.054 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:26.054 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:26.054 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:26.054 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:26.054 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1385673 00:05:26.054 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:26.054 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1385673 00:05:26.054 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:26.054 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1385673 00:05:26.054 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:26.054 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:26.054 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:26.054 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:26.054 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:26.054 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:26.054 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:26.054 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:26.054 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:26.054 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1385673 00:05:26.054 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:26.054 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1385673 00:05:26.054 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:26.054 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1385673 00:05:26.054 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:26.054 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1385673 00:05:26.054 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:26.054 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1385673 00:05:26.054 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:26.054 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:26.054 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:26.054 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:26.054 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:26.054 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:26.054 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:26.054 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1385673 00:05:26.054 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:26.054 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:26.054 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:26.054 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:26.054 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:26.054 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1385673 00:05:26.054 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:26.054 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:26.054 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:26.054 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1385673 00:05:26.054 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:26.054 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1385673 00:05:26.054 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:26.054 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:26.055 06:52:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:26.055 06:52:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1385673 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1385673 ']' 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1385673 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1385673 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1385673' 00:05:26.055 killing process with pid 1385673 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1385673 00:05:26.055 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1385673 00:05:26.622 00:05:26.622 real 0m1.048s 00:05:26.622 user 0m1.021s 00:05:26.622 sys 0m0.395s 00:05:26.622 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.622 06:52:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.622 ************************************ 00:05:26.622 END TEST dpdk_mem_utility 00:05:26.622 ************************************ 00:05:26.622 06:52:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.622 06:52:55 -- spdk/autotest.sh@181 -- # 
run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.622 06:52:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.622 06:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.622 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.622 ************************************ 00:05:26.622 START TEST event 00:05:26.622 ************************************ 00:05:26.622 06:52:55 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.622 * Looking for test storage... 00:05:26.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:26.622 06:52:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:26.622 06:52:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.622 06:52:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.622 06:52:55 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:26.622 06:52:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.622 06:52:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.622 ************************************ 00:05:26.622 START TEST event_perf 00:05:26.622 ************************************ 00:05:26.622 06:52:55 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.622 Running I/O for 1 seconds...[2024-07-13 06:52:55.906396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:26.622 [2024-07-13 06:52:55.906458] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385861 ] 00:05:26.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.622 [2024-07-13 06:52:55.937383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:26.622 [2024-07-13 06:52:55.967527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.622 [2024-07-13 06:52:56.060829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.622 [2024-07-13 06:52:56.060896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.622 [2024-07-13 06:52:56.060941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.622 [2024-07-13 06:52:56.060943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.995 Running I/O for 1 seconds... 00:05:27.995 lcore 0: 236991 00:05:27.995 lcore 1: 236993 00:05:27.995 lcore 2: 236993 00:05:27.995 lcore 3: 236991 00:05:27.995 done. 
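The per-lcore counts above come from the event_perf app, which posts events across every reactor in the supplied core mask for a fixed number of seconds and reports how many each lcore processed. The invocation, as used by this run:

  # Event-passing benchmark on cores 0-3 for 1 second
  test/event/event_perf/event_perf -m 0xF -t 1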
00:05:27.995 00:05:27.995 real 0m1.250s 00:05:27.995 user 0m4.157s 00:05:27.995 sys 0m0.088s 00:05:27.995 06:52:57 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.995 06:52:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.995 ************************************ 00:05:27.995 END TEST event_perf 00:05:27.995 ************************************ 00:05:27.995 06:52:57 event -- common/autotest_common.sh@1142 -- # return 0 00:05:27.995 06:52:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.995 06:52:57 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:27.995 06:52:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.995 06:52:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.995 ************************************ 00:05:27.995 START TEST event_reactor 00:05:27.995 ************************************ 00:05:27.995 06:52:57 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.995 [2024-07-13 06:52:57.201628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:27.995 [2024-07-13 06:52:57.201680] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386018 ] 00:05:27.995 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.995 [2024-07-13 06:52:57.232158] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:27.995 [2024-07-13 06:52:57.261988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.995 [2024-07-13 06:52:57.357516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.369 test_start 00:05:29.369 oneshot 00:05:29.369 tick 100 00:05:29.369 tick 100 00:05:29.369 tick 250 00:05:29.369 tick 100 00:05:29.369 tick 100 00:05:29.369 tick 100 00:05:29.369 tick 250 00:05:29.369 tick 500 00:05:29.369 tick 100 00:05:29.369 tick 100 00:05:29.369 tick 250 00:05:29.369 tick 100 00:05:29.369 tick 100 00:05:29.369 test_end 00:05:29.369 00:05:29.369 real 0m1.241s 00:05:29.369 user 0m1.157s 00:05:29.369 sys 0m0.080s 00:05:29.369 06:52:58 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.369 06:52:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.369 ************************************ 00:05:29.369 END TEST event_reactor 00:05:29.369 ************************************ 00:05:29.369 06:52:58 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.369 06:52:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.369 06:52:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:29.369 06:52:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.369 06:52:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.369 ************************************ 00:05:29.369 START TEST event_reactor_perf 00:05:29.369 ************************************ 00:05:29.369 06:52:58 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.369 [2024-07-13 06:52:58.490054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:29.369 [2024-07-13 06:52:58.490116] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386185 ] 00:05:29.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.369 [2024-07-13 06:52:58.522635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
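The test_start/tick trace that follows is printed by the single-core reactor test app, which arms timed pollers and logs a marker each time one fires until the -t deadline expires; the reactor_perf variant further below counts raw events per second instead. Both are standalone binaries, invoked here as:

  # Poller tick trace, one core, 1 second
  test/event/reactor/reactor -t 1
  # Raw event throughput on one core, 1 second
  test/event/reactor_perf/reactor_perf -t 1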
00:05:29.369 [2024-07-13 06:52:58.552408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.369 [2024-07-13 06:52:58.646176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.306 test_start 00:05:30.306 test_end 00:05:30.306 Performance: 358997 events per second 00:05:30.306 00:05:30.306 real 0m1.251s 00:05:30.306 user 0m1.157s 00:05:30.306 sys 0m0.089s 00:05:30.306 06:52:59 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.306 06:52:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.306 ************************************ 00:05:30.306 END TEST event_reactor_perf 00:05:30.306 ************************************ 00:05:30.306 06:52:59 event -- common/autotest_common.sh@1142 -- # return 0 00:05:30.306 06:52:59 event -- event/event.sh@49 -- # uname -s 00:05:30.306 06:52:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.306 06:52:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.306 06:52:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.306 06:52:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.306 06:52:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.565 ************************************ 00:05:30.565 START TEST event_scheduler 00:05:30.565 ************************************ 00:05:30.565 06:52:59 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.565 * Looking for test storage... 00:05:30.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:30.565 06:52:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.565 06:52:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1386367 00:05:30.565 06:52:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.565 06:52:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.565 06:52:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1386367 00:05:30.565 06:52:59 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1386367 ']' 00:05:30.565 06:52:59 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.565 06:52:59 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.565 06:52:59 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.565 06:52:59 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.565 06:52:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.565 [2024-07-13 06:52:59.868243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:30.565 [2024-07-13 06:52:59.868320] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386367 ] 00:05:30.565 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.565 [2024-07-13 06:52:59.899735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.565 [2024-07-13 06:52:59.926056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.565 [2024-07-13 06:53:00.018142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.565 [2024-07-13 06:53:00.018162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.565 [2024-07-13 06:53:00.018209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.565 [2024-07-13 06:53:00.018212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:30.823 06:53:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.823 [2024-07-13 06:53:00.083193] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:30.823 [2024-07-13 06:53:00.083224] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.823 [2024-07-13 06:53:00.083241] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.823 [2024-07-13 06:53:00.083252] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.823 [2024-07-13 06:53:00.083262] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.823 06:53:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.823 [2024-07-13 06:53:00.182599] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
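This run selects the dynamic scheduler while the app is still in --wait-for-rpc state; the DPDK governor fails to initialize because the 0xF core mask covers only part of an SMT sibling set, so the scheduler applies its load/core/busy limits (20/80/95) and the test proceeds anyway. The same switch can be made against any target that has not finished init, e.g.:

  # Pick the dynamic scheduler, then let initialization complete
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # Confirm which scheduler is active
  ./scripts/rpc.py framework_get_scheduler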
00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.823 06:53:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.823 06:53:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.823 ************************************ 00:05:30.823 START TEST scheduler_create_thread 00:05:30.823 ************************************ 00:05:30.823 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:30.823 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.823 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.823 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.823 2 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.824 3 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.824 4 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.824 5 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.824 6 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.824 7 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.824 8 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.824 9 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.824 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.082 10 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.082 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.648 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.648 00:05:31.648 real 0m0.590s 00:05:31.648 user 0m0.011s 00:05:31.648 sys 0m0.002s 00:05:31.648 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.648 06:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.648 ************************************ 00:05:31.648 END TEST scheduler_create_thread 00:05:31.648 ************************************ 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:31.648 06:53:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.648 06:53:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1386367 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1386367 ']' 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1386367 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1386367 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1386367' 00:05:31.648 killing process with pid 1386367 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1386367 00:05:31.648 06:53:00 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1386367 00:05:31.906 [2024-07-13 06:53:01.278848] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
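scheduler_create_thread above exercises RPCs that come from the test's own plugin, not the core SPDK API: it pins active and idle threads to each core, creates a half_active thread (id 11) and raises it to 50% activity, then creates and immediately deletes thread 12. A sketch of one such round-trip, assuming the scheduler_plugin module is importable by rpc.py (e.g. via PYTHONPATH):

  # Plugin-provided RPCs; they exist only in the scheduler test app
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12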
00:05:32.164 00:05:32.164 real 0m1.709s 00:05:32.164 user 0m2.228s 00:05:32.164 sys 0m0.303s 00:05:32.164 06:53:01 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.164 06:53:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.164 ************************************ 00:05:32.164 END TEST event_scheduler 00:05:32.164 ************************************ 00:05:32.164 06:53:01 event -- common/autotest_common.sh@1142 -- # return 0 00:05:32.164 06:53:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:32.164 06:53:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:32.164 06:53:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.164 06:53:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.164 06:53:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.164 ************************************ 00:05:32.164 START TEST app_repeat 00:05:32.164 ************************************ 00:05:32.164 06:53:01 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1386671 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1386671' 00:05:32.164 Process app_repeat pid: 1386671 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:32.164 spdk_app_start Round 0 00:05:32.164 06:53:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1386671 /var/tmp/spdk-nbd.sock 00:05:32.164 06:53:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1386671 ']' 00:05:32.164 06:53:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.164 06:53:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.164 06:53:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.164 06:53:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.164 06:53:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.164 [2024-07-13 06:53:01.566761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:32.164 [2024-07-13 06:53:01.566826] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386671 ] 00:05:32.164 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.164 [2024-07-13 06:53:01.599278] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:32.423 [2024-07-13 06:53:01.632173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.423 [2024-07-13 06:53:01.724137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.423 [2024-07-13 06:53:01.724142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.423 06:53:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.423 06:53:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:32.423 06:53:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.681 Malloc0 00:05:32.681 06:53:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.939 Malloc1 00:05:32.939 06:53:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.939 06:53:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.939 06:53:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.939 06:53:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.939 06:53:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.939 06:53:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.940 06:53:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.198 /dev/nbd0 00:05:33.198 06:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.198 06:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:33.198 06:53:02 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:33.198 06:53:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.456 1+0 records in 00:05:33.456 1+0 records out 00:05:33.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160079 s, 25.6 MB/s 00:05:33.456 06:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.456 06:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:33.456 06:53:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.456 06:53:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:33.456 06:53:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:33.456 06:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.456 06:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.456 06:53:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.456 /dev/nbd1 00:05:33.712 06:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.712 06:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.712 1+0 records in 00:05:33.712 1+0 records out 00:05:33.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208816 s, 19.6 MB/s 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.712 06:53:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:33.712 
06:53:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:33.712 06:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.712 06:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.713 06:53:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.713 06:53:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.713 06:53:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.969 06:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.969 { 00:05:33.969 "nbd_device": "/dev/nbd0", 00:05:33.969 "bdev_name": "Malloc0" 00:05:33.969 }, 00:05:33.969 { 00:05:33.969 "nbd_device": "/dev/nbd1", 00:05:33.970 "bdev_name": "Malloc1" 00:05:33.970 } 00:05:33.970 ]' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.970 { 00:05:33.970 "nbd_device": "/dev/nbd0", 00:05:33.970 "bdev_name": "Malloc0" 00:05:33.970 }, 00:05:33.970 { 00:05:33.970 "nbd_device": "/dev/nbd1", 00:05:33.970 "bdev_name": "Malloc1" 00:05:33.970 } 00:05:33.970 ]' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.970 /dev/nbd1' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.970 /dev/nbd1' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.970 256+0 records in 00:05:33.970 256+0 records out 00:05:33.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496377 s, 211 MB/s 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.970 256+0 records in 00:05:33.970 256+0 records out 00:05:33.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235587 s, 44.5 MB/s 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.970 256+0 records in 00:05:33.970 256+0 records out 00:05:33.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223114 s, 47.0 MB/s 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.970 06:53:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.226 06:53:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.227 06:53:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.483 06:53:03 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.483 06:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.741 06:53:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.741 06:53:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.999 06:53:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.256 [2024-07-13 06:53:04.635503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.513 [2024-07-13 06:53:04.726121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.513 [2024-07-13 06:53:04.726125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.514 [2024-07-13 06:53:04.784109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.514 [2024-07-13 06:53:04.784176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
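The Round 0 pass above is the harness's nbd_dd_data_verify sequence: write 1 MiB of random data through each NBD device with O_DIRECT, then compare it back against the source file. A minimal standalone sketch of that pattern (the temp-file path and device list are illustrative; the dd and cmp invocations match the trace):

    #!/usr/bin/env bash
    set -e
    tmp_file=$(mktemp)                      # stands in for .../test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # Write phase: 256 x 4 KiB of random data, pushed through each device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # bypass the page cache
    done
    # Verify phase: byte-for-byte comparison; any mismatch fails the test.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"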
00:05:38.032 06:53:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.032 06:53:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:38.032 spdk_app_start Round 1 00:05:38.032 06:53:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1386671 /var/tmp/spdk-nbd.sock 00:05:38.032 06:53:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1386671 ']' 00:05:38.032 06:53:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.032 06:53:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.032 06:53:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.032 06:53:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.032 06:53:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.290 06:53:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.290 06:53:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:38.290 06:53:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.547 Malloc0 00:05:38.547 06:53:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.805 Malloc1 00:05:38.805 06:53:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.805 06:53:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.063 /dev/nbd0 00:05:39.063 06:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.063 06:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.063 1+0 records in 00:05:39.063 1+0 records out 00:05:39.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147417 s, 27.8 MB/s 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.063 06:53:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:39.063 06:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.063 06:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.063 06:53:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.320 /dev/nbd1 00:05:39.320 06:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.320 06:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.320 1+0 records in 00:05:39.320 1+0 records out 00:05:39.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016803 s, 24.4 MB/s 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:39.320 06:53:08 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.320 06:53:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:39.320 06:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.320 06:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.320 06:53:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.320 06:53:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.320 06:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.578 06:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.578 { 00:05:39.578 "nbd_device": "/dev/nbd0", 00:05:39.578 "bdev_name": "Malloc0" 00:05:39.578 }, 00:05:39.578 { 00:05:39.578 "nbd_device": "/dev/nbd1", 00:05:39.578 "bdev_name": "Malloc1" 00:05:39.578 } 00:05:39.578 ]' 00:05:39.578 06:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.578 { 00:05:39.578 "nbd_device": "/dev/nbd0", 00:05:39.578 "bdev_name": "Malloc0" 00:05:39.578 }, 00:05:39.578 { 00:05:39.578 "nbd_device": "/dev/nbd1", 00:05:39.578 "bdev_name": "Malloc1" 00:05:39.578 } 00:05:39.578 ]' 00:05:39.578 06:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.578 /dev/nbd1' 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.578 /dev/nbd1' 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.578 256+0 records in 00:05:39.578 256+0 records out 00:05:39.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501869 s, 209 MB/s 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.578 06:53:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.836 256+0 records in 00:05:39.836 256+0 records out 00:05:39.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0204948 s, 51.2 MB/s 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.836 256+0 records in 00:05:39.836 256+0 records out 00:05:39.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241361 s, 43.4 MB/s 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.836 06:53:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.093 06:53:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.350 06:53:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.615 06:53:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.615 06:53:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.872 06:53:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.129 [2024-07-13 06:53:10.426957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.129 [2024-07-13 06:53:10.519251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.129 [2024-07-13 06:53:10.519255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.129 [2024-07-13 06:53:10.582014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.129 [2024-07-13 06:53:10.582083] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
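Stopping a device is asynchronous: after nbd_stop_disk the harness polls /proc/partitions until the kernel drops the node, as in the waitfornbd_exit trace above. A sketch with the same bounds (the inter-try delay is an assumption; the traced helper may pace differently):

    # Poll until an NBD device name disappears from /proc/partitions.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: stop waiting
            sleep 0.1                                          # assumed back-off
        done
        return 0
    }
    waitfornbd_exit nbd0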
00:05:44.430 06:53:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.430 06:53:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:44.430 spdk_app_start Round 2 00:05:44.430 06:53:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1386671 /var/tmp/spdk-nbd.sock 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1386671 ']' 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.430 06:53:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:44.430 06:53:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.430 Malloc0 00:05:44.430 06:53:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.688 Malloc1 00:05:44.688 06:53:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.688 06:53:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.947 /dev/nbd0 00:05:44.947 06:53:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.947 06:53:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.947 1+0 records in 00:05:44.947 1+0 records out 00:05:44.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194263 s, 21.1 MB/s 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.947 06:53:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:44.947 06:53:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.947 06:53:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.947 06:53:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.205 /dev/nbd1 00:05:45.205 06:53:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.205 06:53:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.205 1+0 records in 00:05:45.205 1+0 records out 00:05:45.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184413 s, 22.2 MB/s 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.205 06:53:14 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.205 06:53:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.205 06:53:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.205 06:53:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.205 06:53:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.205 06:53:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.205 06:53:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.464 { 00:05:45.464 "nbd_device": "/dev/nbd0", 00:05:45.464 "bdev_name": "Malloc0" 00:05:45.464 }, 00:05:45.464 { 00:05:45.464 "nbd_device": "/dev/nbd1", 00:05:45.464 "bdev_name": "Malloc1" 00:05:45.464 } 00:05:45.464 ]' 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.464 { 00:05:45.464 "nbd_device": "/dev/nbd0", 00:05:45.464 "bdev_name": "Malloc0" 00:05:45.464 }, 00:05:45.464 { 00:05:45.464 "nbd_device": "/dev/nbd1", 00:05:45.464 "bdev_name": "Malloc1" 00:05:45.464 } 00:05:45.464 ]' 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.464 /dev/nbd1' 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.464 /dev/nbd1' 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.464 06:53:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.465 256+0 records in 00:05:45.465 256+0 records out 00:05:45.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424901 s, 247 MB/s 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.465 256+0 records in 00:05:45.465 256+0 records out 00:05:45.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0232205 s, 45.2 MB/s 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.465 256+0 records in 00:05:45.465 256+0 records out 00:05:45.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238637 s, 43.9 MB/s 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.465 06:53:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.032 06:53:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.290 06:53:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.290 06:53:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.548 06:53:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.807 [2024-07-13 06:53:16.209183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.065 [2024-07-13 06:53:16.299622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.065 [2024-07-13 06:53:16.299626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.065 [2024-07-13 06:53:16.355148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.065 [2024-07-13 06:53:16.355231] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
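The count checks bracketing each round reduce to one RPC plus a jq/grep pipeline; once no disks remain, grep -c still prints 0 but exits non-zero, hence the bare `true` seen in the trace. A sketch (rpc.py path shortened):

    # Count NBD devices currently exported by the target.
    nbd_get_count() {
        local rpc_server=$1 json names
        json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true   # || true keeps set -e happy on zero matches
    }
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)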
00:05:49.593 06:53:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1386671 /var/tmp/spdk-nbd.sock 00:05:49.593 06:53:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1386671 ']' 00:05:49.593 06:53:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.593 06:53:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.593 06:53:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.593 06:53:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.593 06:53:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:49.850 06:53:19 event.app_repeat -- event/event.sh@39 -- # killprocess 1386671 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1386671 ']' 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1386671 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1386671 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1386671' 00:05:49.850 killing process with pid 1386671 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1386671 00:05:49.850 06:53:19 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1386671 00:05:50.107 spdk_app_start is called in Round 0. 00:05:50.107 Shutdown signal received, stop current app iteration 00:05:50.107 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:50.107 spdk_app_start is called in Round 1. 00:05:50.107 Shutdown signal received, stop current app iteration 00:05:50.107 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:50.107 spdk_app_start is called in Round 2. 00:05:50.107 Shutdown signal received, stop current app iteration 00:05:50.107 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:50.107 spdk_app_start is called in Round 3. 
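Rounds 1 through 3 above repeat the Round 0 sequence against a restarted app; reduced to its control flow, app_repeat looks roughly like the following (the $app_pid variable and the loop reduction are inferred from the trace, not copied from event.sh):

    # Restart the event app and rerun the NBD verify sequence each round.
    for i in {0..2}; do
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock
        # ...Malloc bdev setup + NBD write/verify, as traced above...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                     # let the app cycle into the next round
    done
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock   # final round, then tear down
    killprocess "$app_pid"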
00:05:50.107 Shutdown signal received, stop current app iteration 00:05:50.107 06:53:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.107 06:53:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.107 00:05:50.107 real 0m17.930s 00:05:50.107 user 0m38.990s 00:05:50.107 sys 0m3.238s 00:05:50.107 06:53:19 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.107 06:53:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.107 ************************************ 00:05:50.107 END TEST app_repeat 00:05:50.108 ************************************ 00:05:50.108 06:53:19 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.108 06:53:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.108 06:53:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.108 06:53:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.108 06:53:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.108 06:53:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.108 ************************************ 00:05:50.108 START TEST cpu_locks 00:05:50.108 ************************************ 00:05:50.108 06:53:19 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.365 * Looking for test storage... 00:05:50.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:50.365 06:53:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:50.365 06:53:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:50.365 06:53:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:50.365 06:53:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:50.365 06:53:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.365 06:53:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.365 06:53:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.365 ************************************ 00:05:50.365 START TEST default_locks 00:05:50.365 ************************************ 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1389022 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1389022 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1389022 ']' 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
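The "Waiting for process..." banner comes from a waitforlisten-style helper that blocks until the freshly launched target's RPC socket answers. Only the banner and the retry budget of 100 are visible in the trace; the probe and back-off below are assumptions:

    # Block until $pid is alive and its RPC socket responds.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1                          # process died
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5                                                       # assumed pacing
        done
        return 1
    }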
00:05:50.365 06:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.365 06:53:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.365 [2024-07-13 06:53:19.652760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:50.365 [2024-07-13 06:53:19.652842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389022 ] 00:05:50.365 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.365 [2024-07-13 06:53:19.685347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.365 [2024-07-13 06:53:19.711957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.365 [2024-07-13 06:53:19.796727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.623 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.623 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:50.623 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1389022 00:05:50.623 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1389022 00:05:50.623 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.188 lslocks: write error 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1389022 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1389022 ']' 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1389022 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389022 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389022' 00:05:51.188 killing process with pid 1389022 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1389022 00:05:51.188 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1389022 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1389022 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1389022 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t 
waitforlisten 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1389022 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1389022 ']' 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1389022) - No such process 00:05:51.446 ERROR: process (pid: 1389022) is no longer running 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.446 00:05:51.446 real 0m1.212s 00:05:51.446 user 0m1.129s 00:05:51.446 sys 0m0.556s 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.446 06:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.446 ************************************ 00:05:51.446 END TEST default_locks 00:05:51.446 ************************************ 00:05:51.446 06:53:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:51.446 06:53:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:51.446 06:53:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.446 06:53:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.446 06:53:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.446 ************************************ 00:05:51.446 START TEST default_locks_via_rpc 00:05:51.446 ************************************ 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1389190 
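default_locks ends by re-running waitforlisten against the killed pid inside a NOT wrapper and asserting that it fails; the "No such process" and ERROR lines above are the expected output of that negative check. Simplified from the autotest_common.sh trace (argument validation elided):

    # Succeed only when the wrapped command fails normally.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # >128 means killed by a signal: a real failure
        (( es != 0 ))                # invert: non-zero exit is what we wanted
    }
    NOT waitforlisten "$spdk_tgt_pid"   # must fail once the target is gone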
00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1389190 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1389190 ']' 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.446 06:53:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.704 [2024-07-13 06:53:20.914980] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:51.704 [2024-07-13 06:53:20.915061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389190 ] 00:05:51.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.704 [2024-07-13 06:53:20.947551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
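Both lock tests assert core ownership the same way: a running spdk_tgt pins each core it claims with a file lock, which lslocks can list by pid. The earlier "lslocks: write error" is benign noise: grep -q exits on its first match, so lslocks takes EPIPE on the rest of its output. The check itself, as traced:

    # True when the target still holds a CPU-core file lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist "$spdk_tgt_pid" && echo "core lock held by $spdk_tgt_pid"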
00:05:51.704 [2024-07-13 06:53:20.974435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.704 [2024-07-13 06:53:21.063109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.961 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.961 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:51.961 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.961 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.961 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.961 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1389190 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1389190 00:05:51.962 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1389190 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1389190 ']' 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1389190 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389190 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389190' 00:05:52.219 killing process with pid 1389190 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1389190 00:05:52.219 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1389190 00:05:52.782 00:05:52.782 real 0m1.128s 00:05:52.782 user 0m1.064s 00:05:52.782 sys 0m0.518s 00:05:52.782 06:53:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.782 06:53:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.782 ************************************ 00:05:52.782 END TEST default_locks_via_rpc 00:05:52.782 ************************************ 00:05:52.782 06:53:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.782 06:53:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:52.782 06:53:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.782 06:53:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.782 06:53:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.782 ************************************ 00:05:52.782 START TEST non_locking_app_on_locked_coremask 00:05:52.782 ************************************ 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1389351 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1389351 /var/tmp/spdk.sock 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1389351 ']' 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.782 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.782 [2024-07-13 06:53:22.090340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:52.782 [2024-07-13 06:53:22.090423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389351 ] 00:05:52.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.782 [2024-07-13 06:53:22.126856] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
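default_locks_via_rpc, which finishes above, flips the same locks at runtime through two RPCs instead of only observing them. A sketch of that round trip (the harness inspects the lock files between calls; lslocks stands in for that check here):

    rpc_py="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc_py framework_disable_cpumask_locks      # release the per-core lock files
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "lock unexpectedly held"
    $rpc_py framework_enable_cpumask_locks       # take them back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "lock missing"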
00:05:52.782 [2024-07-13 06:53:22.157143] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.038 [2024-07-13 06:53:22.247365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1389356 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1389356 /var/tmp/spdk2.sock 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1389356 ']' 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.295 06:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.295 [2024-07-13 06:53:22.560100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:53.295 [2024-07-13 06:53:22.560190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389356 ] 00:05:53.295 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.295 [2024-07-13 06:53:22.600576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.295 [2024-07-13 06:53:22.658635] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.295 [2024-07-13 06:53:22.658667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.551 [2024-07-13 06:53:22.842779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.114 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.114 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:54.114 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1389351 00:05:54.114 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1389351 00:05:54.114 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.676 lslocks: write error 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1389351 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1389351 ']' 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1389351 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389351 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389351' 00:05:54.676 killing process with pid 1389351 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1389351 00:05:54.676 06:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1389351 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1389356 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1389356 ']' 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1389356 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389356 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389356' 00:05:55.606 
killing process with pid 1389356 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1389356 00:05:55.606 06:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1389356 00:05:55.864 00:05:55.864 real 0m3.137s 00:05:55.864 user 0m3.273s 00:05:55.864 sys 0m1.021s 00:05:55.864 06:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.864 06:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.864 ************************************ 00:05:55.864 END TEST non_locking_app_on_locked_coremask 00:05:55.864 ************************************ 00:05:55.864 06:53:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.864 06:53:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.864 06:53:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.864 06:53:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.864 06:53:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.864 ************************************ 00:05:55.864 START TEST locking_app_on_unlocked_coremask 00:05:55.864 ************************************ 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1389780 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1389780 /var/tmp/spdk.sock 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1389780 ']' 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.864 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.864 [2024-07-13 06:53:25.277209] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:55.864 [2024-07-13 06:53:25.277299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389780 ] 00:05:55.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.864 [2024-07-13 06:53:25.310162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:56.122 [2024-07-13 06:53:25.337992] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:56.122 [2024-07-13 06:53:25.338019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.122 [2024-07-13 06:53:25.431511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1389794 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1389794 /var/tmp/spdk2.sock 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1389794 ']' 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.380 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.381 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.381 06:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.381 [2024-07-13 06:53:25.746911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:56.381 [2024-07-13 06:53:25.747011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389794 ] 00:05:56.381 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.381 [2024-07-13 06:53:25.780661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
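Note: the lslocks/grep pairs appearing throughout these traces are the locks_exist helper from event/cpu_locks.sh; only grep's exit status decides the check, so the stray "lslocks: write error" lines emitted by lslocks itself do not affect the result. Condensed from the trace:

    # succeeds iff the given pid holds a lock on an spdk_cpu_lock file
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }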
00:05:56.638 [2024-07-13 06:53:25.844709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.638 [2024-07-13 06:53:26.028713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.575 06:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.575 06:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.575 06:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1389794 00:05:57.575 06:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1389794 00:05:57.575 06:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.141 lslocks: write error 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1389780 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1389780 ']' 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1389780 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389780 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389780' 00:05:58.141 killing process with pid 1389780 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1389780 00:05:58.141 06:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1389780 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1389794 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1389794 ']' 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1389794 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389794 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389794' 00:05:59.072 killing process with pid 1389794 00:05:59.072 
06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1389794 00:05:59.072 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1389794 00:05:59.330 00:05:59.330 real 0m3.414s 00:05:59.330 user 0m3.568s 00:05:59.330 sys 0m1.144s 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 ************************************ 00:05:59.330 END TEST locking_app_on_unlocked_coremask 00:05:59.330 ************************************ 00:05:59.330 06:53:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.330 06:53:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.330 06:53:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.330 06:53:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.330 06:53:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 ************************************ 00:05:59.330 START TEST locking_app_on_locked_coremask 00:05:59.330 ************************************ 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1390225 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1390225 /var/tmp/spdk.sock 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1390225 ']' 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.330 06:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 [2024-07-13 06:53:28.734430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:59.330 [2024-07-13 06:53:28.734524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390225 ] 00:05:59.330 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.330 [2024-07-13 06:53:28.765533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
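Note: the teardown steps traced above all go through the killprocess helper in autotest_common.sh: verify the pid is alive with kill -0, confirm via ps that the command name is a reactor (and not sudo), then kill and wait. A condensed sketch, with the argument and OS checks visible in the trace omitted:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                       # still running?
        ps --no-headers -o comm= "$pid"      # expect reactor_0, never sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }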
00:05:59.587 [2024-07-13 06:53:28.797549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.587 [2024-07-13 06:53:28.886828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1390231 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1390231 /var/tmp/spdk2.sock 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1390231 /var/tmp/spdk2.sock 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1390231 /var/tmp/spdk2.sock 00:05:59.844 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1390231 ']' 00:05:59.845 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.845 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.845 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.845 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.845 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.845 [2024-07-13 06:53:29.196683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:59.845 [2024-07-13 06:53:29.196781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390231 ] 00:05:59.845 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.845 [2024-07-13 06:53:29.230643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:59.845 [2024-07-13 06:53:29.294644] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1390225 has claimed it. 00:05:59.845 [2024-07-13 06:53:29.294699] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1390231) - No such process 00:06:00.776 ERROR: process (pid: 1390231) is no longer running 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1390225 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1390225 00:06:00.776 06:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.033 lslocks: write error 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1390225 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1390225 ']' 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1390225 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1390225 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1390225' 00:06:01.033 killing process with pid 1390225 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1390225 00:06:01.033 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1390225 00:06:01.598 00:06:01.598 real 0m2.109s 00:06:01.598 user 0m2.262s 00:06:01.598 sys 0m0.663s 00:06:01.598 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.598 06:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.598 ************************************ 00:06:01.598 END TEST locking_app_on_locked_coremask 00:06:01.598 ************************************ 00:06:01.598 06:53:30 
event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.598 06:53:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:01.598 06:53:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.598 06:53:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.598 06:53:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.598 ************************************ 00:06:01.598 START TEST locking_overlapped_coremask 00:06:01.598 ************************************ 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1390519 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1390519 /var/tmp/spdk.sock 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1390519 ']' 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.598 06:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.598 [2024-07-13 06:53:30.895070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:01.598 [2024-07-13 06:53:30.895168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390519 ] 00:06:01.598 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.598 [2024-07-13 06:53:30.927156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:01.598 [2024-07-13 06:53:30.955209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.598 [2024-07-13 06:53:31.041060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.598 [2024-07-13 06:53:31.041114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.598 [2024-07-13 06:53:31.041117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1390529 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1390529 /var/tmp/spdk2.sock 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1390529 /var/tmp/spdk2.sock 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1390529 /var/tmp/spdk2.sock 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1390529 ']' 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.856 06:53:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.113 [2024-07-13 06:53:31.346894] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:02.113 [2024-07-13 06:53:31.346990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390529 ] 00:06:02.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.113 [2024-07-13 06:53:31.381947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.113 [2024-07-13 06:53:31.436351] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1390519 has claimed it. 00:06:02.113 [2024-07-13 06:53:31.436401] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1390529) - No such process 00:06:02.677 ERROR: process (pid: 1390529) is no longer running 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1390519 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1390519 ']' 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1390519 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1390519 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1390519' 00:06:02.677 killing process with pid 1390519 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1390519 00:06:02.677 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1390519 00:06:03.242 00:06:03.242 real 0m1.615s 00:06:03.242 user 0m4.360s 00:06:03.242 sys 0m0.443s 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.242 ************************************ 00:06:03.242 END TEST locking_overlapped_coremask 00:06:03.242 ************************************ 00:06:03.242 06:53:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.242 06:53:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.242 06:53:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.242 06:53:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.242 06:53:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.242 ************************************ 00:06:03.242 START TEST locking_overlapped_coremask_via_rpc 00:06:03.242 ************************************ 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1390707 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1390707 /var/tmp/spdk.sock 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1390707 ']' 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.242 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.242 [2024-07-13 06:53:32.556344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:03.242 [2024-07-13 06:53:32.556446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390707 ] 00:06:03.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.242 [2024-07-13 06:53:32.588537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.242 [2024-07-13 06:53:32.620273] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.242 [2024-07-13 06:53:32.620304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.506 [2024-07-13 06:53:32.713326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.506 [2024-07-13 06:53:32.713380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.506 [2024-07-13 06:53:32.713398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1390723 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1390723 /var/tmp/spdk2.sock 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1390723 ']' 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.764 06:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.764 [2024-07-13 06:53:33.018906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:03.764 [2024-07-13 06:53:33.018991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390723 ] 00:06:03.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.764 [2024-07-13 06:53:33.057013] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
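Note: the overlapped-coremask tests pair their masks deliberately: the first target runs with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so the only contested core is core 2. That is why both the direct launch above and the RPC attempt below fail with "Cannot create lock on core 2". Expanding a mask to its core list, for illustration only:

    for mask in 0x7 0x1c; do
        printf '%s -> cores:' "$mask"
        for bit in 0 1 2 3 4 5 6 7; do
            (( mask >> bit & 1 )) && printf ' %d' "$bit"
        done
        echo
    done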
00:06:03.764 [2024-07-13 06:53:33.112479] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.764 [2024-07-13 06:53:33.112506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.021 [2024-07-13 06:53:33.288011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.021 [2024-07-13 06:53:33.291924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.021 [2024-07-13 06:53:33.291926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.585 [2024-07-13 06:53:33.975975] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1390707 has claimed it. 
00:06:04.585 request: 00:06:04.585 { 00:06:04.585 "method": "framework_enable_cpumask_locks", 00:06:04.585 "req_id": 1 00:06:04.585 } 00:06:04.585 Got JSON-RPC error response 00:06:04.585 response: 00:06:04.585 { 00:06:04.585 "code": -32603, 00:06:04.585 "message": "Failed to claim CPU core: 2" 00:06:04.585 } 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1390707 /var/tmp/spdk.sock 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1390707 ']' 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.585 06:53:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1390723 /var/tmp/spdk2.sock 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1390723 ']' 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
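Note: the JSON-RPC exchange above is the intended negative path: the first target (-m 0x7) already holds the core-2 lock, so framework_enable_cpumask_locks on the second target fails with -32603 and the test asserts on exactly that error. The failing call as issued by rpc_cmd in the trace (rpc_cmd resolves to scripts/rpc.py):

    # expected to return -32603 while the first target still holds core 2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks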
00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.842 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.099 00:06:05.099 real 0m2.003s 00:06:05.099 user 0m1.036s 00:06:05.099 sys 0m0.184s 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.099 06:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.099 ************************************ 00:06:05.099 END TEST locking_overlapped_coremask_via_rpc 00:06:05.099 ************************************ 00:06:05.099 06:53:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:05.099 06:53:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.099 06:53:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1390707 ]] 00:06:05.099 06:53:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1390707 00:06:05.099 06:53:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1390707 ']' 00:06:05.099 06:53:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1390707 00:06:05.099 06:53:34 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:05.099 06:53:34 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.099 06:53:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1390707 00:06:05.357 06:53:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.357 06:53:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.357 06:53:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1390707' 00:06:05.357 killing process with pid 1390707 00:06:05.357 06:53:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1390707 00:06:05.357 06:53:34 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1390707 00:06:05.615 06:53:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1390723 ]] 00:06:05.615 06:53:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1390723 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1390723 ']' 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1390723 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1390723 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1390723' 00:06:05.615 killing process with pid 1390723 00:06:05.615 06:53:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1390723 00:06:05.615 06:53:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1390723 00:06:06.182 06:53:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.182 06:53:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.182 06:53:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1390707 ]] 00:06:06.182 06:53:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1390707 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1390707 ']' 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1390707 00:06:06.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1390707) - No such process 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1390707 is not found' 00:06:06.182 Process with pid 1390707 is not found 00:06:06.182 06:53:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1390723 ]] 00:06:06.182 06:53:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1390723 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1390723 ']' 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1390723 00:06:06.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1390723) - No such process 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1390723 is not found' 00:06:06.182 Process with pid 1390723 is not found 00:06:06.182 06:53:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.182 00:06:06.182 real 0m15.876s 00:06:06.182 user 0m27.627s 00:06:06.182 sys 0m5.415s 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.182 06:53:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.182 ************************************ 00:06:06.182 END TEST cpu_locks 00:06:06.182 ************************************ 00:06:06.182 06:53:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.182 00:06:06.182 real 0m39.599s 00:06:06.182 user 1m15.448s 00:06:06.182 sys 0m9.446s 00:06:06.182 06:53:35 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.182 06:53:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.182 ************************************ 00:06:06.182 END TEST event 00:06:06.182 ************************************ 00:06:06.182 06:53:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.182 06:53:35 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.182 06:53:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.182 06:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.182 
06:53:35 -- common/autotest_common.sh@10 -- # set +x 00:06:06.182 ************************************ 00:06:06.182 START TEST thread 00:06:06.182 ************************************ 00:06:06.182 06:53:35 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.182 * Looking for test storage... 00:06:06.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:06.182 06:53:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.182 06:53:35 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:06.182 06:53:35 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.182 06:53:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.182 ************************************ 00:06:06.182 START TEST thread_poller_perf 00:06:06.182 ************************************ 00:06:06.182 06:53:35 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.182 [2024-07-13 06:53:35.542728] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:06.182 [2024-07-13 06:53:35.542782] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391205 ] 00:06:06.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.182 [2024-07-13 06:53:35.573407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.182 [2024-07-13 06:53:35.599599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.440 [2024-07-13 06:53:35.689401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.440 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:07.374 ======================================
00:06:07.374 busy:2712528777 (cyc)
00:06:07.374 total_run_count: 301000
00:06:07.374 tsc_hz: 2700000000 (cyc)
00:06:07.374 ======================================
00:06:07.374 poller_cost: 9011 (cyc), 3337 (nsec)
00:06:07.374
00:06:07.374 real 0m1.246s
00:06:07.374 user 0m1.160s
00:06:07.374 sys 0m0.081s
00:06:07.374 06:53:36 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:07.374 06:53:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:07.374 ************************************
00:06:07.374 END TEST thread_poller_perf
00:06:07.374 ************************************
00:06:07.374 06:53:36 thread -- common/autotest_common.sh@1142 -- # return 0
00:06:07.374 06:53:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:07.374 06:53:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:06:07.374 06:53:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:07.374 06:53:36 thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.374 ************************************
00:06:07.374 START TEST thread_poller_perf
00:06:07.374 ************************************
00:06:07.374 06:53:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:07.632 [2024-07-13 06:53:36.838583] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:06:07.632 [2024-07-13 06:53:36.838654] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391364 ]
00:06:07.632 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.632 [2024-07-13 06:53:36.871069] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:07.632 [2024-07-13 06:53:36.901361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.632 [2024-07-13 06:53:36.994489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.632 Running 1000 pollers for 1 seconds with 0 microseconds period.
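In the result blocks above and below, poller_cost is busy cycles divided by total_run_count, with the nanosecond figure converted through tsc_hz; the printed values match that formula exactly, assuming integer division. Checking the 1-microsecond-period run:

  echo $(( 2712528777 / 301000 ))             # 9011 cyc per poller invocation
  echo $(( 9011 * 1000000000 / 2700000000 ))  # 3337 nsec at the 2.7 GHz TSC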
00:06:09.003 ======================================
00:06:09.003 busy:2702635116 (cyc)
00:06:09.003 total_run_count: 3868000
00:06:09.003 tsc_hz: 2700000000 (cyc)
00:06:09.003 ======================================
00:06:09.003 poller_cost: 698 (cyc), 258 (nsec)
00:06:09.003
00:06:09.003 real 0m1.253s
00:06:09.003 user 0m1.165s
00:06:09.003 sys 0m0.082s
00:06:09.003 06:53:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:09.003 06:53:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:09.003 ************************************
00:06:09.003 END TEST thread_poller_perf
00:06:09.003 ************************************
00:06:09.003 06:53:38 thread -- common/autotest_common.sh@1142 -- # return 0
00:06:09.003 06:53:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:09.003
00:06:09.003 real 0m2.637s
00:06:09.003 user 0m2.388s
00:06:09.003 sys 0m0.249s
00:06:09.003 06:53:38 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:09.003 06:53:38 thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.003 ************************************
00:06:09.003 END TEST thread
00:06:09.003 ************************************
00:06:09.003 06:53:38 -- common/autotest_common.sh@1142 -- # return 0
00:06:09.003 06:53:38 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:06:09.003 06:53:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:09.003 06:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:09.003 06:53:38 -- common/autotest_common.sh@10 -- # set +x
00:06:09.003 ************************************
00:06:09.003 START TEST accel
00:06:09.003 ************************************
00:06:09.003 06:53:38 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:06:09.003 * Looking for test storage...
00:06:09.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:09.003 06:53:38 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:06:09.003 06:53:38 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:06:09.003 06:53:38 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:09.003 06:53:38 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1391557
00:06:09.003 06:53:38 accel -- accel/accel.sh@63 -- # waitforlisten 1391557
00:06:09.004 06:53:38 accel -- common/autotest_common.sh@829 -- # '[' -z 1391557 ']'
00:06:09.004 06:53:38 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:06:09.004 06:53:38 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.004 06:53:38 accel -- accel/accel.sh@61 -- # build_accel_config
00:06:09.004 06:53:38 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:09.004 06:53:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:09.004 06:53:38 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
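waitforlisten 1391557 above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock, which is why the trace echoes the waiting message while the target boots. A minimal sketch of that kind of helper; the probe method, retry count, and interval are assumptions, not lifted from autotest_common.sh:

  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
      [ -S "$sock" ] && return 0              # UNIX socket is up: target is ready
      sleep 0.1
    done
    return 1                                  # timed out
  }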
00:06:09.004 06:53:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.004 06:53:38 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.004 06:53:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.004 06:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.004 06:53:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.004 06:53:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.004 06:53:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:09.004 06:53:38 accel -- accel/accel.sh@41 -- # jq -r . 00:06:09.004 [2024-07-13 06:53:38.250494] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:09.004 [2024-07-13 06:53:38.250564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391557 ] 00:06:09.004 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.004 [2024-07-13 06:53:38.281273] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.004 [2024-07-13 06:53:38.311247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.004 [2024-07-13 06:53:38.400784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.263 06:53:38 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.263 06:53:38 accel -- common/autotest_common.sh@862 -- # return 0 00:06:09.263 06:53:38 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:09.263 06:53:38 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:09.263 06:53:38 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:09.263 06:53:38 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:09.263 06:53:38 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:09.263 06:53:38 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:09.263 06:53:38 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.263 06:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.263 06:53:38 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:09.263 06:53:38 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.263 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.263 06:53:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.263 06:53:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.264 06:53:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.264 06:53:38 accel -- accel/accel.sh@75 -- # killprocess 1391557 00:06:09.264 06:53:38 accel -- common/autotest_common.sh@948 -- # '[' -z 1391557 ']' 00:06:09.264 06:53:38 accel -- common/autotest_common.sh@952 -- # kill -0 1391557 00:06:09.264 06:53:38 accel -- common/autotest_common.sh@953 -- # uname 00:06:09.264 06:53:38 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.264 06:53:38 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1391557 00:06:09.522 06:53:38 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.522 06:53:38 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.522 06:53:38 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1391557' 00:06:09.522 killing process with pid 1391557 00:06:09.522 06:53:38 accel -- common/autotest_common.sh@967 -- # kill 1391557 00:06:09.522 06:53:38 accel -- common/autotest_common.sh@972 -- # wait 1391557 00:06:09.804 06:53:39 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:09.804 06:53:39 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:09.804 06:53:39 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:09.804 06:53:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.804 06:53:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.804 06:53:39 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:09.804 06:53:39 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:09.804 06:53:39 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.804 06:53:39 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:09.804 06:53:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.804 06:53:39 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:09.804 06:53:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:09.804 06:53:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.804 06:53:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.804 ************************************ 00:06:09.804 START TEST accel_missing_filename 00:06:09.804 ************************************ 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.804 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:09.804 06:53:39 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:10.088 [2024-07-13 06:53:39.244863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:10.088 [2024-07-13 06:53:39.244962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391725 ] 00:06:10.088 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.088 [2024-07-13 06:53:39.276857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
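This accel_missing_filename case starts accel_perf with -w compress but no -l input file, so the run below is expected to abort with "A filename is required." For contrast, an invocation that satisfies -l, using the bib test file the compress_verify test feeds in next (shown for illustration only; this job never runs compress without -y):

  accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib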
00:06:10.088 [2024-07-13 06:53:39.306959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.088 [2024-07-13 06:53:39.400739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.088 [2024-07-13 06:53:39.462320] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.346 [2024-07-13 06:53:39.546107] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:10.346 A filename is required. 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.346 00:06:10.346 real 0m0.401s 00:06:10.346 user 0m0.285s 00:06:10.346 sys 0m0.150s 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.346 06:53:39 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:10.346 ************************************ 00:06:10.346 END TEST accel_missing_filename 00:06:10.346 ************************************ 00:06:10.346 06:53:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.346 06:53:39 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.346 06:53:39 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:10.346 06:53:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.346 06:53:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.346 ************************************ 00:06:10.346 START TEST accel_compress_verify 00:06:10.346 ************************************ 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.346 06:53:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.346 06:53:39 
accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:10.346 06:53:39 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:10.346 [2024-07-13 06:53:39.692807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:10.346 [2024-07-13 06:53:39.692887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391749 ] 00:06:10.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.346 [2024-07-13 06:53:39.725531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.346 [2024-07-13 06:53:39.758171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.604 [2024-07-13 06:53:39.854394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.604 [2024-07-13 06:53:39.916022] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.604 [2024-07-13 06:53:40.003616] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:10.863 00:06:10.863 Compression does not support the verify option, aborting. 
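The es= trace that follows is autotest_common.sh normalizing this expected failure: an exit status above 128 (death by signal) has 128 subtracted (161 becomes 33 here), the case statement collapses recognized failure codes to 1, and the final (( !es == 0 )) succeeds only because accel_perf exited non-zero. A reduced sketch of that NOT-style wrapper; the real case patterns are elided and simply collapsed to 1 here:

  NOT() {
    local es=0
    "$@" || es=$?                        # run the wrapped command, capture its status
    (( es > 128 )) && es=$(( es - 128 )) # strip the signal offset, e.g. 161 -> 33
    (( es != 0 )) && es=1                # collapse failure codes, as the case statement does
    (( !es == 0 ))                       # succeed only if the wrapped command failed
  }

So run_test counts the compress -y rejection as a pass even though the binary aborted.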
00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.863 00:06:10.863 real 0m0.411s 00:06:10.863 user 0m0.301s 00:06:10.863 sys 0m0.141s 00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.863 06:53:40 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:10.863 ************************************ 00:06:10.863 END TEST accel_compress_verify 00:06:10.863 ************************************ 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.863 06:53:40 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.863 ************************************ 00:06:10.863 START TEST accel_wrong_workload 00:06:10.863 ************************************ 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:10.863 06:53:40 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:10.863 Unsupported workload type: foobar 00:06:10.863 [2024-07-13 06:53:40.151929] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:10.863 accel_perf options: 00:06:10.863 [-h help message] 00:06:10.863 [-q queue depth per core] 00:06:10.863 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.863 [-T number of threads per core 00:06:10.863 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.863 [-t time in seconds] 00:06:10.863 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.863 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:10.863 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.863 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.863 [-S for crc32c workload, use this seed value (default 0) 00:06:10.863 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.863 [-f for fill workload, use this BYTE value (default 255) 00:06:10.863 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.863 [-y verify result if this switch is on] 00:06:10.863 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.863 Can be used to spread operations across a wider range of memory. 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.863 00:06:10.863 real 0m0.023s 00:06:10.863 user 0m0.014s 00:06:10.863 sys 0m0.010s 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.863 06:53:40 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:10.863 ************************************ 00:06:10.863 END TEST accel_wrong_workload 00:06:10.863 ************************************ 00:06:10.863 Error: writing output failed: Broken pipe 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.863 06:53:40 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.863 06:53:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.863 ************************************ 00:06:10.863 START TEST accel_negative_buffers 00:06:10.863 ************************************ 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:10.864 06:53:40 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:10.864 -x option must be non-negative. 00:06:10.864 [2024-07-13 06:53:40.217362] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:10.864 accel_perf options: 00:06:10.864 [-h help message] 00:06:10.864 [-q queue depth per core] 00:06:10.864 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.864 [-T number of threads per core 00:06:10.864 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.864 [-t time in seconds] 00:06:10.864 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.864 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:10.864 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.864 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.864 [-S for crc32c workload, use this seed value (default 0) 00:06:10.864 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.864 [-f for fill workload, use this BYTE value (default 255) 00:06:10.864 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.864 [-y verify result if this switch is on] 00:06:10.864 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.864 Can be used to spread operations across a wider range of memory. 
00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.864 00:06:10.864 real 0m0.021s 00:06:10.864 user 0m0.015s 00:06:10.864 sys 0m0.006s 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.864 06:53:40 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:10.864 ************************************ 00:06:10.864 END TEST accel_negative_buffers 00:06:10.864 ************************************ 00:06:10.864 Error: writing output failed: Broken pipe 00:06:10.864 06:53:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.864 06:53:40 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:10.864 06:53:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.864 06:53:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.864 06:53:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.864 ************************************ 00:06:10.864 START TEST accel_crc32c 00:06:10.864 ************************************ 00:06:10.864 06:53:40 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:10.864 06:53:40 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:10.864 [2024-07-13 06:53:40.283390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:10.864 [2024-07-13 06:53:40.283455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391937 ] 00:06:10.864 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.864 [2024-07-13 06:53:40.315436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
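The repeating accel/accel.sh@19-23 lines that fill the rest of this trace are accel_test consuming accel_perf's configuration banner: each output line is split at ':' by IFS=: read -r var val, and the case statement keeps the fields the test asserts on afterwards (accel_opc, accel_module). A rough sketch under that reading; the banner keys are assumptions rather than strings copied from accel_perf:

  while IFS=: read -r var val; do
    case "$var" in                       # text left of ':' selects the field
      *"Workload Type"*) accel_opc=${val# } ;;
      *"Module"*)        accel_module=${val# } ;;
    esac
  done < <(accel_perf -t 1 -w crc32c -S 32 -y)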
00:06:11.133 [2024-07-13 06:53:40.347746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.133 [2024-07-13 06:53:40.441164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.133 06:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.505 
06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:12.505 06:53:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.505 00:06:12.505 real 0m1.411s 00:06:12.505 user 0m1.260s 00:06:12.505 sys 0m0.152s 00:06:12.505 06:53:41 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.505 06:53:41 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:12.505 ************************************ 00:06:12.505 END TEST accel_crc32c 00:06:12.505 ************************************ 00:06:12.505 06:53:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.505 06:53:41 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:12.505 06:53:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:12.505 06:53:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.505 06:53:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.505 ************************************ 00:06:12.505 START TEST accel_crc32c_C2 00:06:12.505 ************************************ 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.505 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:12.505 [2024-07-13 06:53:41.744952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:12.505 [2024-07-13 06:53:41.745015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392090 ] 00:06:12.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.505 [2024-07-13 06:53:41.778086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.505 [2024-07-13 06:53:41.810271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.505 [2024-07-13 06:53:41.901479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.763 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.764 06:53:41 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.764 06:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.696 00:06:13.696 real 0m1.409s 00:06:13.696 user 0m1.259s 00:06:13.696 sys 0m0.152s 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.696 06:53:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:13.696 ************************************ 00:06:13.696 END TEST accel_crc32c_C2 00:06:13.696 ************************************ 00:06:13.953 06:53:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.953 06:53:43 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:13.953 06:53:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.953 06:53:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.953 06:53:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.953 ************************************ 00:06:13.953 START TEST accel_copy 00:06:13.953 ************************************ 00:06:13.953 06:53:43 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:13.953 06:53:43 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:13.953 [2024-07-13 06:53:43.196633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:13.953 [2024-07-13 06:53:43.196698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392326 ] 00:06:13.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.953 [2024-07-13 06:53:43.229034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.953 [2024-07-13 06:53:43.259124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.953 [2024-07-13 06:53:43.352189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
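Each block of repeated "val= / case \"$var\" in / IFS=: / read -r var val" lines is one pass of the same parsing loop in accel.sh: the expected settings for the run (core mask 0x1, the opcode, '4096 bytes' buffers, the software module, queue depth 32, 1 thread, '1 seconds', Yes) stream through it, and the loop latches the opcode and module it sees (accel_opc=copy at @23, accel_module=software at @22). An illustrative reconstruction of the loop shape; only the IFS=:, read, case, and the two assignments come from the trace, while the key names are invented placeholders:

    while IFS=: read -r var val; do          # the @19 lines above
        case "$var" in                       # the @21 lines
            opc)    accel_opc=$val ;;        # @23: copy, fill, crc32c, ...
            module) accel_module=$val ;;     # @22: software
            *)      ;;                       # bare "val=" entries fall through
        esac
    done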
00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 06:53:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.142 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.142 06:53:44 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:15.143 06:53:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.143 00:06:15.143 real 0m1.407s 00:06:15.143 user 0m1.262s 00:06:15.143 sys 0m0.146s 00:06:15.143 06:53:44 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.143 06:53:44 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:15.143 ************************************ 00:06:15.143 END TEST accel_copy 00:06:15.143 ************************************ 00:06:15.401 06:53:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.401 06:53:44 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.401 06:53:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:15.401 06:53:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.401 06:53:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.401 ************************************ 00:06:15.401 START TEST accel_fill 00:06:15.401 ************************************ 00:06:15.401 06:53:44 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:15.401 06:53:44 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
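The three "@27" checks that closed accel_copy above — [[ -n software ]], [[ -n copy ]], [[ software == \s\o\f\t\w\a\r\e ]] — are the standard post-run assertions. The backslash-per-character form is not corruption: bash xtrace escapes a quoted right-hand side of == inside [[ ]] to show it is matched as a literal string rather than a glob. Written out with the variables the trace has already expanded (a reconstruction, not the literal accel.sh text):

    [[ -n $accel_module ]]              # a module was selected (here: software)
    [[ -n $accel_opc ]]                 # an opcode was exercised (here: copy)
    [[ $accel_module == "software" ]]   # and it ran on the software engine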
00:06:15.401 [2024-07-13 06:53:44.647725] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:15.401 [2024-07-13 06:53:44.647784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392524 ] 00:06:15.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.401 [2024-07-13 06:53:44.678579] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.401 [2024-07-13 06:53:44.705552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.401 [2024-07-13 06:53:44.795876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
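accel_fill boots its own single-core SPDK app, as the "DPDK EAL parameters" line above shows (-c 0x1 core mask, --huge-unlink, --base-virtaddr, a per-pid --file-prefix). In the accel_perf command, -t and -w are the run time and workload; -f 128, -q 64 and -a 64 read here as the fill byte, queue depth and buffer alignment, though those readings are inferred from context rather than taken from accel_perf -h. A rough by-hand equivalent, with the flag mapping likewise inferred (spdk_app_parse_args and accel_perf -h are authoritative):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -m 0x1: one reactor core ("-c 0x1" in the EAL line); --huge-unlink and
    # --base-virtaddr carry over the remaining EAL settings shown above
    ./build/examples/accel_perf -m 0x1 --huge-unlink \
        --base-virtaddr=0x200000000000 \
        -t 1 -w fill -f 128 -q 64 -a 64 -y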
00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.659 06:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.592 06:53:46 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:16.592 06:53:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.592 00:06:16.592 real 0m1.394s 00:06:16.592 user 0m1.251s 00:06:16.592 sys 0m0.144s 00:06:16.592 06:53:46 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.592 06:53:46 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:16.592 ************************************ 00:06:16.592 END TEST accel_fill 00:06:16.592 ************************************ 00:06:16.850 06:53:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.850 06:53:46 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:16.850 06:53:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.850 06:53:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.850 06:53:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.850 ************************************ 00:06:16.850 START TEST accel_copy_crc32c 00:06:16.850 ************************************ 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.850 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:16.850 
06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:16.850 [2024-07-13 06:53:46.091584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:16.850 [2024-07-13 06:53:46.091639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392682 ] 00:06:16.850 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.850 [2024-07-13 06:53:46.122196] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:16.850 [2024-07-13 06:53:46.154008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.850 [2024-07-13 06:53:46.245018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
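copy_crc32c is the fused operation: each op copies a 4 KiB source and computes its CRC-32C in the same pass, and -y makes accel_perf verify every result against a software reference. In the surrounding trace, accel_opc=copy_crc32c lands at @23; "val=0" reads as the CRC seed and the paired "'4096 bytes'" entries as the copy and checksum buffer sizes, though both readings are inferred from position in the stream, not from the script source. The same run by hand (drop -c /dev/fd/62, which is the JSON config fd the wrapper pipes in):

    ./build/examples/accel_perf -t 1 -w copy_crc32c -y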
00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.109 06:53:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.109 06:53:46 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.046 00:06:18.046 real 0m1.409s 00:06:18.046 user 0m1.265s 00:06:18.046 sys 0m0.146s 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.046 06:53:47 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:18.046 ************************************ 00:06:18.046 END TEST accel_copy_crc32c 00:06:18.046 ************************************ 00:06:18.304 06:53:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.304 06:53:47 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.304 06:53:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:18.304 06:53:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.304 06:53:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.304 ************************************ 00:06:18.304 START TEST accel_copy_crc32c_C2 00:06:18.304 ************************************ 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local 
accel_opc 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.304 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:18.304 [2024-07-13 06:53:47.550850] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:18.304 [2024-07-13 06:53:47.550993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392835 ] 00:06:18.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.304 [2024-07-13 06:53:47.582254] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
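accel_copy_crc32c_C2 repeats the previous workload with -C 2. The value trace that follows shows a "'4096 bytes'" and an "'8192 bytes'" pair, consistent with -C 2 chaining two 4 KiB source segments into one 8 KiB copy+CRC operation; that reading of -C is inferred from the buffer pair rather than from documentation. By hand:

    # chained variant of the previous run: two source segments per operation
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2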
00:06:18.304 [2024-07-13 06:53:47.614225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.304 [2024-07-13 06:53:47.706322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.562 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.563 06:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.496 00:06:19.496 real 0m1.393s 00:06:19.496 user 0m1.246s 00:06:19.496 sys 0m0.148s 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.496 06:53:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:19.496 ************************************ 00:06:19.496 END TEST accel_copy_crc32c_C2 00:06:19.496 ************************************ 00:06:19.496 06:53:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.496 06:53:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:19.496 06:53:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.496 06:53:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.496 06:53:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.754 ************************************ 00:06:19.754 START TEST accel_dualcast 00:06:19.754 ************************************ 00:06:19.754 06:53:48 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:19.754 06:53:48 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:19.754 [2024-07-13 06:53:48.988511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:19.754 [2024-07-13 06:53:48.988578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393112 ] 00:06:19.754 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.754 [2024-07-13 06:53:49.021410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.754 [2024-07-13 06:53:49.051897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.754 [2024-07-13 06:53:49.145155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.754 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:19.755 06:53:49 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.755 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.013 06:53:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.947 06:53:50 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:20.947 06:53:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.947 00:06:20.947 real 0m1.401s 00:06:20.947 user 0m1.255s 00:06:20.947 sys 0m0.147s 00:06:20.947 06:53:50 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.947 06:53:50 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:20.947 ************************************ 00:06:20.947 END TEST accel_dualcast 00:06:20.947 ************************************ 00:06:20.947 06:53:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.947 06:53:50 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:20.947 06:53:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.947 06:53:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.947 06:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.207 ************************************ 00:06:21.207 START TEST accel_compare 00:06:21.207 ************************************ 00:06:21.207 06:53:50 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:21.207 06:53:50 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:21.207 [2024-07-13 06:53:50.432233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:21.207 [2024-07-13 06:53:50.432297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393271 ] 00:06:21.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.207 [2024-07-13 06:53:50.465618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.207 [2024-07-13 06:53:50.495519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.207 [2024-07-13 06:53:50.588645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@23 
-- # accel_opc=compare 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.207 06:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.580 06:53:51 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:22.580 06:53:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.580 00:06:22.580 real 0m1.404s 00:06:22.580 user 0m1.259s 00:06:22.580 sys 0m0.146s 00:06:22.580 06:53:51 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.580 06:53:51 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:22.580 ************************************ 00:06:22.580 END TEST accel_compare 00:06:22.580 ************************************ 00:06:22.580 06:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.580 06:53:51 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:22.580 06:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:22.580 06:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.580 06:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.580 ************************************ 00:06:22.580 START TEST accel_xor 00:06:22.580 ************************************ 00:06:22.580 06:53:51 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.580 06:53:51 
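The compare run that just finished above can be reproduced outside the harness. A minimal sketch, assuming an SPDK build at the path the log shows and a placeholder JSON config (an assumption; the harness builds the real one via build_accel_config and hands it over on a /dev/fd descriptor, which is why the log shows -c /dev/fd/62):

  # run the compare workload for 1 second (-t 1) and verify results (-y);
  # process substitution supplies the accel JSON config the way accel.sh does
  ./build/examples/accel_perf -c <(echo '{"subsystems":[]}') -t 1 -w compare -y

With no hardware modules configured ([[ -n '' ]] evaluates false in build_accel_config), the software engine handles the operation, which is what the end-of-test assertions check.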
accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:22.580 06:53:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:22.580 [2024-07-13 06:53:51.882759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:22.580 [2024-07-13 06:53:51.882823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393428 ] 00:06:22.580 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.580 [2024-07-13 06:53:51.915157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.580 [2024-07-13 06:53:51.945094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.843 [2024-07-13 06:53:52.039436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:22.843 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 
accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.844 06:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.215 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.216 00:06:24.216 real 0m1.412s 00:06:24.216 user 0m1.265s 00:06:24.216 sys 0m0.148s 00:06:24.216 06:53:53 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.216 06:53:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:24.216 ************************************ 00:06:24.216 END TEST accel_xor 00:06:24.216 ************************************ 00:06:24.216 06:53:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.216 06:53:53 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:24.216 06:53:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.216 06:53:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.216 06:53:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.216 ************************************ 00:06:24.216 START TEST accel_xor 00:06:24.216 ************************************ 00:06:24.216 06:53:53 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:24.216 06:53:53 accel.accel_xor 
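Between the two xor tests only the source-buffer count changes: the first run records a count of 2 and this second one is launched with -x 3 and records 3. A sketch of the pair, under the same placeholder-config assumption as above:

  ./build/examples/accel_perf -c <(echo '{"subsystems":[]}') -t 1 -w xor -y        # 2 sources (default)
  ./build/examples/accel_perf -c <(echo '{"subsystems":[]}') -t 1 -w xor -y -x 3   # 3 sources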
-- accel/accel.sh@12 -- # build_accel_config 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:24.216 [2024-07-13 06:53:53.341736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:24.216 [2024-07-13 06:53:53.341818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393590 ] 00:06:24.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.216 [2024-07-13 06:53:53.373484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:24.216 [2024-07-13 06:53:53.403688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.216 [2024-07-13 06:53:53.495332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.216 06:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@20 -- # 
val= 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:25.586 06:53:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.586 00:06:25.586 real 0m1.394s 00:06:25.586 user 0m1.258s 00:06:25.586 sys 0m0.137s 00:06:25.586 06:53:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.586 06:53:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:25.586 ************************************ 00:06:25.586 END TEST accel_xor 00:06:25.586 ************************************ 00:06:25.586 06:53:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.586 06:53:54 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:25.586 06:53:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.586 06:53:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.586 06:53:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.586 ************************************ 00:06:25.586 START TEST accel_dif_verify 00:06:25.586 ************************************ 00:06:25.586 06:53:54 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@31 -- # 
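Every test in this section ends with the same three assertions: a module name was captured, the echoed opcode matches the workload, and the module is the software engine. A condensed sketch of that logic, using the harness's own variable names ($accel_module and $accel_opc both appear in the log):

  # the escaped spelling keeps the right-hand side a literal match inside [[ ]]
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] \
      && [[ $accel_module == \s\o\f\t\w\a\r\e ]] && echo 'test passed'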
accel_json_cfg=() 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:25.586 [2024-07-13 06:53:54.783311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:25.586 [2024-07-13 06:53:54.783370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393854 ] 00:06:25.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.586 [2024-07-13 06:53:54.815241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.586 [2024-07-13 06:53:54.847267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.586 [2024-07-13 06:53:54.937668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.586 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.587 06:53:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.587 06:53:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.587 06:53:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.587 06:53:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.956 06:53:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.956 06:53:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.956 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.956 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:26.957 06:53:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.957 00:06:26.957 real 0m1.388s 00:06:26.957 user 0m1.259s 00:06:26.957 sys 0m0.132s 00:06:26.957 06:53:56 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.957 06:53:56 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.957 ************************************ 00:06:26.957 END TEST accel_dif_verify 00:06:26.957 ************************************ 00:06:26.957 06:53:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.957 06:53:56 accel -- 
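The dif_* runs drop the -y flag seen in the earlier workloads, since dif_verify is itself the verification step, and they record two extra sizes alongside the usual '4096 bytes': '512 bytes' and '8 bytes'. The log does not label these; plausibly they are the protected block size and the per-block metadata size, but treat that reading as an assumption. The invocation as logged, with the placeholder config as before:

  # no -y: the workload's own pass/fail is the verification
  ./build/examples/accel_perf -c <(echo '{"subsystems":[]}') -t 1 -w dif_verify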
accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:26.957 06:53:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:26.957 06:53:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.957 06:53:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.957 ************************************ 00:06:26.957 START TEST accel_dif_generate 00:06:26.957 ************************************ 00:06:26.957 06:53:56 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:26.957 06:53:56 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:26.957 [2024-07-13 06:53:56.215693] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:26.957 [2024-07-13 06:53:56.215764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394018 ] 00:06:26.957 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.957 [2024-07-13 06:53:56.248038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:26.957 [2024-07-13 06:53:56.278283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.957 [2024-07-13 06:53:56.370874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 
06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.216 06:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:28.588 06:53:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.588 00:06:28.588 real 0m1.409s 00:06:28.588 user 0m1.264s 00:06:28.588 sys 0m0.148s 00:06:28.588 06:53:57 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.588 06:53:57 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:28.588 ************************************ 00:06:28.588 END TEST accel_dif_generate 00:06:28.588 ************************************ 00:06:28.588 06:53:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.588 06:53:57 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.588 06:53:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:28.588 06:53:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.588 06:53:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.588 ************************************ 00:06:28.588 START TEST accel_dif_generate_copy 00:06:28.588 ************************************ 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
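The three DIF variants in this section share the same shape; only the -w argument changes across dif_verify, dif_generate, and the dif_generate_copy run starting here. A sketch that exercises all three in turn, same placeholder-config assumption as above:

  for w in dif_verify dif_generate dif_generate_copy; do
      ./build/examples/accel_perf -c <(echo '{"subsystems":[]}') -t 1 -w "$w"
  done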
dif_generate_copy 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:28.588 [2024-07-13 06:53:57.670700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:28.588 [2024-07-13 06:53:57.670767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394169 ] 00:06:28.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.588 [2024-07-13 06:53:57.702781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.588 [2024-07-13 06:53:57.732702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.588 [2024-07-13 06:53:57.826042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.588 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.589 06:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:29.960 00:06:29.960 real 0m1.399s 00:06:29.960 user 0m1.254s 00:06:29.960 sys 0m0.147s 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.960 06:53:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.960 ************************************ 00:06:29.960 END TEST accel_dif_generate_copy 00:06:29.960 ************************************ 00:06:29.960 06:53:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.960 06:53:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:29.960 06:53:59 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.960 06:53:59 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:29.960 06:53:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.960 06:53:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.960 ************************************ 00:06:29.960 START TEST accel_comp 00:06:29.960 ************************************ 00:06:29.960 06:53:59 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:29.960 [2024-07-13 06:53:59.116049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:29.960 [2024-07-13 06:53:59.116110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394412 ] 00:06:29.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.960 [2024-07-13 06:53:59.148308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
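The accel_perf invocation traced above can be reproduced by hand; a minimal sketch, assuming the same build-tree layout and an empty accel JSON config (the fd-62 redirection mirrors build_accel_config handing its jq output to the binary; flag meanings are inferred from this trace rather than from accel_perf's help text):

  # run the software compress workload for 1 second against the bib test file
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w compress \
      -l "$SPDK/test/accel/bib" 62< <(echo '{}' | jq -r .)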
00:06:29.960 [2024-07-13 06:53:59.178600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.960 [2024-07-13 06:53:59.270006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.960 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.961 06:53:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:31.332 06:54:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.332 00:06:31.332 real 0m1.400s 00:06:31.332 user 0m1.259s 00:06:31.332 sys 0m0.145s 00:06:31.332 06:54:00 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.332 06:54:00 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:31.332 ************************************ 00:06:31.332 END TEST accel_comp 00:06:31.332 ************************************ 00:06:31.332 06:54:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.332 06:54:00 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.332 06:54:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.332 06:54:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.332 06:54:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.332 ************************************ 00:06:31.332 START TEST accel_decomp 00:06:31.332 ************************************ 00:06:31.332 06:54:00 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
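The real/user/sys triple above is bash's time output wrapped around the test body; a rough reconstruction of the run_test banner-and-timing pattern implied by the START/END markers (the actual helper lives in autotest_common.sh and likely differs in its bookkeeping):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                     # emits the real/user/sys triple seen in the log
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }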
00:06:31.332 [2024-07-13 06:54:00.562198] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:31.332 [2024-07-13 06:54:00.562270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394640 ] 00:06:31.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.332 [2024-07-13 06:54:00.594937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.332 [2024-07-13 06:54:00.625696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.332 [2024-07-13 06:54:00.718225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.332 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.333 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.590 06:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.522 06:54:01 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.522 06:54:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.522 00:06:32.522 real 0m1.400s 00:06:32.522 user 0m1.257s 00:06:32.522 sys 0m0.146s 00:06:32.522 06:54:01 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.522 06:54:01 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:32.522 ************************************ 00:06:32.522 END TEST accel_decomp 00:06:32.522 ************************************ 00:06:32.522 06:54:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.522 06:54:01 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.522 06:54:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:32.522 06:54:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.522 06:54:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.780 ************************************ 00:06:32.780 START TEST accel_decomp_full 00:06:32.780 ************************************ 00:06:32.780 06:54:01 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:32.780 06:54:01 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:32.780 [2024-07-13 06:54:02.013198] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:32.780 [2024-07-13 06:54:02.013269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394874 ] 00:06:32.780 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.780 [2024-07-13 06:54:02.045940] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.780 [2024-07-13 06:54:02.077838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.780 [2024-07-13 06:54:02.169958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.780 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.038 06:54:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.970 06:54:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.970 00:06:33.970 real 0m1.428s 00:06:33.970 user 0m1.274s 00:06:33.970 sys 0m0.158s 00:06:33.970 06:54:03 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.970 06:54:03 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 
00:06:33.970 ************************************ 00:06:33.970 END TEST accel_decomp_full 00:06:33.970 ************************************ 00:06:34.228 06:54:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.228 06:54:03 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.228 06:54:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:34.228 06:54:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.228 06:54:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.228 ************************************ 00:06:34.228 START TEST accel_decomp_mcore 00:06:34.228 ************************************ 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:34.228 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:34.228 [2024-07-13 06:54:03.484329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:34.228 [2024-07-13 06:54:03.484397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395027 ] 00:06:34.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.228 [2024-07-13 06:54:03.516730] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
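The -m 0xf mask in the command above selects CPU cores by bit position, which is why the entries that follow report four available cores and start a reactor on each of cores 0-3; a small standalone illustration of the decoding (not part of the SPDK scripts):

  mask=0xf                          # bits 0-3 set -> cores 0,1,2,3
  for core in 0 1 2 3 4 5; do
      if (( (mask >> core) & 1 )); then
          echo "core $core gets a reactor thread"
      fi
  done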
00:06:34.228 [2024-07-13 06:54:03.547205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.228 [2024-07-13 06:54:03.644458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.228 [2024-07-13 06:54:03.644512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.228 [2024-07-13 06:54:03.644578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.228 [2024-07-13 06:54:03.644581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.486 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.487 06:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.422 
06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.422 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.681 00:06:35.681 real 0m1.411s 00:06:35.681 user 0m4.690s 00:06:35.681 sys 0m0.152s 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.681 06:54:04 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:35.681 ************************************ 00:06:35.681 END TEST accel_decomp_mcore 00:06:35.681 ************************************ 00:06:35.681 06:54:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.681 
06:54:04 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.681 06:54:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:35.681 06:54:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.681 06:54:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.681 ************************************ 00:06:35.681 START TEST accel_decomp_full_mcore 00:06:35.681 ************************************ 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:35.681 06:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:35.681 [2024-07-13 06:54:04.943727] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:35.681 [2024-07-13 06:54:04.943792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395296 ] 00:06:35.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.681 [2024-07-13 06:54:04.975718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
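The dense val=/case/read entries that follow come from accel.sh walking its expected-settings list one var:val pair at a time; a simplified sketch of that loop shape, using names visible in the trace (the real script does more bookkeeping, and $expected_settings is a stand-in name here):

  # parse expected "key:value" settings, e.g. "opc:decompress" or "module:software"
  while IFS=: read -r var val; do
      case "$var" in
          opc)    accel_opc=$val ;;
          module) accel_module=$val ;;
          *)      : ;;              # remaining keys are tracked the same way
      esac
  done <<< "$expected_settings"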
00:06:35.681 [2024-07-13 06:54:05.004801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.681 [2024-07-13 06:54:05.103890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.681 [2024-07-13 06:54:05.103949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.681 [2024-07-13 06:54:05.104025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.681 [2024-07-13 06:54:05.104028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:35.940 06:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.317 00:06:37.317 real 0m1.422s 00:06:37.317 user 0m4.737s 00:06:37.317 sys 0m0.147s 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.317 06:54:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 ************************************ 00:06:37.317 END TEST accel_decomp_full_mcore 00:06:37.317 ************************************ 00:06:37.317 06:54:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.317 06:54:06 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.317 06:54:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:37.317 06:54:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.317 06:54:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 ************************************ 00:06:37.317 START TEST accel_decomp_mthread 00:06:37.317 ************************************ 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:37.317 [2024-07-13 06:54:06.411451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:37.317 [2024-07-13 06:54:06.411517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395463 ] 00:06:37.317 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.317 [2024-07-13 06:54:06.443628] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
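A quick read on the mcore timing just reported: user time exceeds wall time because the four reactors poll in parallel, roughly

  user / real = 4.737 s / 1.422 s ≈ 3.3 cores busy on average

which is consistent with the 0xf core mask.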
00:06:37.317 [2024-07-13 06:54:06.473924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.317 [2024-07-13 06:54:06.565751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.317 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.318 06:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.692 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.693 00:06:38.693 real 0m1.399s 00:06:38.693 user 0m1.246s 00:06:38.693 sys 0m0.155s 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.693 06:54:07 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:38.693 ************************************ 00:06:38.693 END TEST accel_decomp_mthread 00:06:38.693 ************************************ 00:06:38.693 06:54:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.693 06:54:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.693 06:54:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:38.693 06:54:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.693 06:54:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.693 ************************************ 00:06:38.693 START TEST accel_decomp_full_mthread 00:06:38.693 ************************************ 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:38.693 06:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:38.693 [2024-07-13 06:54:07.858805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:38.693 [2024-07-13 06:54:07.858881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395868 ] 00:06:38.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.693 [2024-07-13 06:54:07.892128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
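For comparison, the three decompress variants in this stretch differ only in their accel_perf flags (summarized from the invocations logged above; the 'full' cases also switch the buffer size from '4096 bytes' to '111250 bytes'):

  # accel_decomp_full_mcore     -o 0 -m 0xf   four-core mask, full-size buffers
  # accel_decomp_mthread        -T 2          single core (0x1), two threads
  # accel_decomp_full_mthread   -o 0 -T 2     full-size buffers plus two threads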
00:06:38.693 [2024-07-13 06:54:07.923905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.693 [2024-07-13 06:54:08.017010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.693 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.694 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.694 06:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.096 00:06:40.096 real 0m1.433s 00:06:40.096 user 0m1.286s 00:06:40.096 sys 0m0.149s 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.096 06:54:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:40.096 ************************************ 00:06:40.096 END TEST accel_decomp_full_mthread 00:06:40.096 ************************************ 00:06:40.096 06:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.096 06:54:09 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:40.096 06:54:09 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 
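The dif functional test binary below follows the same -c /dev/fd/62 convention as accel_perf; a hypothetical direct invocation (the empty-object config is an assumption, standing in for the harness-generated one):

  # 62<<< feeds the string to the child on fd 62; '{}' is a placeholder config (assumption)
  ./test/accel/dif/dif -c /dev/fd/62 62<<< '{}'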
00:06:40.096 06:54:09 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:40.096 06:54:09 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.096 06:54:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.096 06:54:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.096 06:54:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.096 06:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.096 06:54:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.096 06:54:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.096 06:54:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.096 06:54:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:40.096 06:54:09 accel -- accel/accel.sh@41 -- # jq -r . 00:06:40.096 ************************************ 00:06:40.096 START TEST accel_dif_functional_tests 00:06:40.096 ************************************ 00:06:40.096 06:54:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.096 [2024-07-13 06:54:09.357990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:40.096 [2024-07-13 06:54:09.358058] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396383 ] 00:06:40.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.096 [2024-07-13 06:54:09.390385] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.096 [2024-07-13 06:54:09.422141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.096 [2024-07-13 06:54:09.514490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.096 [2024-07-13 06:54:09.514554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.096 [2024-07-13 06:54:09.514557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.354 00:06:40.354 00:06:40.354 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.354 http://cunit.sourceforge.net/ 00:06:40.354 00:06:40.354 00:06:40.354 Suite: accel_dif 00:06:40.354 Test: verify: DIF generated, GUARD check ...passed 00:06:40.354 Test: verify: DIF generated, APPTAG check ...passed 00:06:40.354 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.354 Test: verify: DIF not generated, GUARD check ...[2024-07-13 06:54:09.608993] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.354 passed 00:06:40.354 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 06:54:09.609081] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.354 passed 00:06:40.354 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 06:54:09.609114] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.354 passed 00:06:40.354 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.354 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 06:54:09.609189] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.354 passed 00:06:40.354 Test: verify: APPTAG incorrect, no APPTAG check ...passed 
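The Expected/Actual pairs in these results correspond to the three fields of the 8-byte T10 protection information appended to each block:

  bytes 0-1  Guard    CRC over the block data
  bytes 2-3  App Tag  application-defined
  bytes 4-7  Ref Tag  typically seeded from the LBA

The recurring 5a bytes read as the test's injected corruption pattern (an inference from the log, not stated in it).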
00:06:40.354 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.354 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:40.354 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 06:54:09.609314] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.354 passed 00:06:40.354 Test: verify copy: DIF generated, GUARD check ...passed 00:06:40.354 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:40.354 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:40.354 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 06:54:09.609485] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.354 passed 00:06:40.354 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 06:54:09.609519] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.354 passed 00:06:40.354 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 06:54:09.609556] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.354 passed 00:06:40.354 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.354 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:40.354 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.354 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:40.354 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.354 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:40.354 Test: generate copy: iovecs-len validate ...[2024-07-13 06:54:09.609761] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:40.354 passed 00:06:40.354 Test: generate copy: buffer alignment validate ...passed 00:06:40.354 00:06:40.354 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.354 suites 1 1 n/a 0 0 00:06:40.354 tests 26 26 26 0 0 00:06:40.354 asserts 115 115 115 0 n/a 00:06:40.354 00:06:40.354 Elapsed time = 0.003 seconds 00:06:40.612 00:06:40.612 real 0m0.501s 00:06:40.612 user 0m0.775s 00:06:40.612 sys 0m0.188s 00:06:40.612 06:54:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.612 06:54:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:40.612 ************************************ 00:06:40.612 END TEST accel_dif_functional_tests 00:06:40.612 ************************************ 00:06:40.612 06:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.612 00:06:40.612 real 0m31.695s 00:06:40.612 user 0m35.028s 00:06:40.612 sys 0m4.610s 00:06:40.612 06:54:09 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.612 06:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.612 ************************************ 00:06:40.612 END TEST accel 00:06:40.612 ************************************ 00:06:40.612 06:54:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.612 06:54:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.612 06:54:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.612 06:54:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.612 06:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:40.612 ************************************ 00:06:40.612 START TEST accel_rpc 00:06:40.612 ************************************ 00:06:40.612 06:54:09 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.612 * Looking for test storage... 00:06:40.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.612 06:54:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.612 06:54:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1396471 00:06:40.612 06:54:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.612 06:54:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1396471 00:06:40.612 06:54:09 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1396471 ']' 00:06:40.612 06:54:09 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.612 06:54:09 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.612 06:54:09 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.612 06:54:09 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.612 06:54:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.612 [2024-07-13 06:54:09.990036] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
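The accel_rpc suite starting here drives the target over JSON-RPC; the equivalent manual flow, sketched with scripts/rpc.py against a target launched with --wait-for-rpc (as above) so the copy opcode can be reassigned before framework init:

  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software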
00:06:40.612 [2024-07-13 06:54:09.990124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396471 ] 00:06:40.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.612 [2024-07-13 06:54:10.025540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.612 [2024-07-13 06:54:10.052831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.870 [2024-07-13 06:54:10.142699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.870 06:54:10 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.870 06:54:10 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:40.870 06:54:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:40.870 06:54:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:40.870 06:54:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:40.870 06:54:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:40.870 06:54:10 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:40.870 06:54:10 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.870 06:54:10 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.870 06:54:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.870 ************************************ 00:06:40.870 START TEST accel_assign_opcode 00:06:40.870 ************************************ 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:40.870 [2024-07-13 06:54:10.227414] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:40.870 [2024-07-13 06:54:10.235427] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.870 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.127 
06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.127 software 00:06:41.127 00:06:41.127 real 0m0.307s 00:06:41.127 user 0m0.039s 00:06:41.127 sys 0m0.007s 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.127 06:54:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.127 ************************************ 00:06:41.127 END TEST accel_assign_opcode 00:06:41.127 ************************************ 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:41.127 06:54:10 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1396471 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1396471 ']' 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1396471 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1396471 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1396471' 00:06:41.127 killing process with pid 1396471 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@967 -- # kill 1396471 00:06:41.127 06:54:10 accel_rpc -- common/autotest_common.sh@972 -- # wait 1396471 00:06:41.694 00:06:41.694 real 0m1.111s 00:06:41.694 user 0m1.026s 00:06:41.694 sys 0m0.443s 00:06:41.694 06:54:10 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.694 06:54:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.694 ************************************ 00:06:41.694 END TEST accel_rpc 00:06:41.694 ************************************ 00:06:41.694 06:54:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.694 06:54:11 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.694 06:54:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.694 06:54:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.694 06:54:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.694 ************************************ 00:06:41.694 START TEST app_cmdline 00:06:41.694 ************************************ 00:06:41.694 06:54:11 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.694 * Looking for test storage... 
00:06:41.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:41.695 06:54:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:41.695 06:54:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1396679 00:06:41.695 06:54:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:41.695 06:54:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1396679 00:06:41.695 06:54:11 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1396679 ']' 00:06:41.695 06:54:11 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.695 06:54:11 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.695 06:54:11 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.695 06:54:11 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.695 06:54:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.695 [2024-07-13 06:54:11.147405] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:41.695 [2024-07-13 06:54:11.147507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396679 ] 00:06:41.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.954 [2024-07-13 06:54:11.180755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
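Since this spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, every other method should be rejected at the RPC layer; a sketch of the two cases the suite exercises below:

  ./scripts/rpc.py spdk_get_version         # allowed: returns the version object
  ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected: JSON-RPC -32601 'Method not found'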
00:06:41.954 [2024-07-13 06:54:11.207518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.954 [2024-07-13 06:54:11.291379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.212 06:54:11 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.212 06:54:11 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:42.212 06:54:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:42.469 { 00:06:42.469 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:06:42.469 "fields": { 00:06:42.469 "major": 24, 00:06:42.469 "minor": 9, 00:06:42.469 "patch": 0, 00:06:42.469 "suffix": "-pre", 00:06:42.469 "commit": "719d03c6a" 00:06:42.469 } 00:06:42.469 } 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:42.469 06:54:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:42.469 06:54:11 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.727 request: 00:06:42.727 { 00:06:42.727 "method": 
"env_dpdk_get_mem_stats", 00:06:42.727 "req_id": 1 00:06:42.727 } 00:06:42.727 Got JSON-RPC error response 00:06:42.727 response: 00:06:42.727 { 00:06:42.727 "code": -32601, 00:06:42.727 "message": "Method not found" 00:06:42.727 } 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.727 06:54:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1396679 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1396679 ']' 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1396679 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1396679 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1396679' 00:06:42.727 killing process with pid 1396679 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@967 -- # kill 1396679 00:06:42.727 06:54:12 app_cmdline -- common/autotest_common.sh@972 -- # wait 1396679 00:06:43.294 00:06:43.294 real 0m1.461s 00:06:43.294 user 0m1.764s 00:06:43.294 sys 0m0.468s 00:06:43.294 06:54:12 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.294 06:54:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 ************************************ 00:06:43.294 END TEST app_cmdline 00:06:43.294 ************************************ 00:06:43.294 06:54:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.294 06:54:12 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.294 06:54:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.294 06:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.294 06:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 ************************************ 00:06:43.294 START TEST version 00:06:43.294 ************************************ 00:06:43.294 06:54:12 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.294 * Looking for test storage... 
00:06:43.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:43.294 06:54:12 version -- app/version.sh@17 -- # get_header_version major 00:06:43.294 06:54:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # cut -f2 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.294 06:54:12 version -- app/version.sh@17 -- # major=24 00:06:43.294 06:54:12 version -- app/version.sh@18 -- # get_header_version minor 00:06:43.294 06:54:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # cut -f2 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.294 06:54:12 version -- app/version.sh@18 -- # minor=9 00:06:43.294 06:54:12 version -- app/version.sh@19 -- # get_header_version patch 00:06:43.294 06:54:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # cut -f2 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.294 06:54:12 version -- app/version.sh@19 -- # patch=0 00:06:43.294 06:54:12 version -- app/version.sh@20 -- # get_header_version suffix 00:06:43.294 06:54:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # cut -f2 00:06:43.294 06:54:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.294 06:54:12 version -- app/version.sh@20 -- # suffix=-pre 00:06:43.294 06:54:12 version -- app/version.sh@22 -- # version=24.9 00:06:43.294 06:54:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:43.294 06:54:12 version -- app/version.sh@28 -- # version=24.9rc0 00:06:43.294 06:54:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:43.294 06:54:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:43.294 06:54:12 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:43.294 06:54:12 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:43.294 00:06:43.294 real 0m0.110s 00:06:43.294 user 0m0.055s 00:06:43.294 sys 0m0.075s 00:06:43.294 06:54:12 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.294 06:54:12 version -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 ************************************ 00:06:43.294 END TEST version 00:06:43.294 ************************************ 00:06:43.294 06:54:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:43.294 06:54:12 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@198 -- # uname -s 00:06:43.294 06:54:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:43.294 06:54:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:43.294 06:54:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 
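The version test traced above never hard-codes a version: app/version.sh pulls each component out of include/spdk/version.h with a grep/cut/tr pipeline and then compares the result against the Python package. A standalone sketch of that pipeline, assuming a checkout under ./spdk and tab-separated #define lines (which the cut -f2 in the trace implies):

  get_header_version() {   # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
      ./spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }
  version="$(get_header_version MAJOR).$(get_header_version MINOR)"
  (( $(get_header_version PATCH) != 0 )) && version+=".$(get_header_version PATCH)"
  # the trace maps the -pre suffix onto an rc0 tag: 24.9 -> 24.9rc0
  [[ "$(get_header_version SUFFIX)" == -pre ]] && version+=rc0
  echo "$version"   # 24.9rc0, matching python3 -c 'import spdk; print(spdk.__version__)'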
00:06:43.294 06:54:12 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:43.294 06:54:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:43.294 06:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 06:54:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:43.294 06:54:12 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:43.294 06:54:12 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.294 06:54:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:43.294 06:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.294 06:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 ************************************ 00:06:43.294 START TEST nvmf_tcp 00:06:43.294 ************************************ 00:06:43.294 06:54:12 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.554 * Looking for test storage... 00:06:43.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.554 06:54:12 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.554 06:54:12 
nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.554 06:54:12 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.554 06:54:12 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.554 06:54:12 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.554 06:54:12 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.554 06:54:12 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:43.554 06:54:12 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:43.554 06:54:12 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:43.554 06:54:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:43.554 06:54:12 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:43.554 06:54:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:43.554 06:54:12 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.554 06:54:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.554 ************************************ 00:06:43.554 START TEST nvmf_example 00:06:43.554 ************************************ 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:43.554 * Looking for test storage... 00:06:43.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.554 06:54:12 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:43.555 06:54:12 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:43.555 06:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.453 06:54:14 
nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:45.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:45.453 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:45.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:45.454 
Found net devices under 0000:0a:00.0: cvl_0_0 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:45.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.454 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:45.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:06:45.712 00:06:45.712 --- 10.0.0.2 ping statistics --- 00:06:45.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.712 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:06:45.712 00:06:45.712 --- 10.0.0.1 ping statistics --- 00:06:45.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.712 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:45.712 06:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1398688 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1398688 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1398688 ']' 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:45.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.712 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:45.970 06:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:45.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.167 Initializing NVMe Controllers 00:06:58.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:58.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:58.167 Initialization complete. Launching workers. 00:06:58.167 ======================================================== 00:06:58.167 Latency(us) 00:06:58.167 Device Information : IOPS MiB/s Average min max 00:06:58.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14948.95 58.39 4280.87 897.72 15264.10 00:06:58.167 ======================================================== 00:06:58.167 Total : 14948.95 58.39 4280.87 897.72 15264.10 00:06:58.167 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.167 rmmod nvme_tcp 00:06:58.167 rmmod nvme_fabrics 00:06:58.167 rmmod nvme_keyring 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1398688 ']' 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1398688 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1398688 ']' 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1398688 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398688 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398688' 00:06:58.167 killing process with pid 1398688 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1398688 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1398688 00:06:58.167 nvmf threads initialize successfully 00:06:58.167 bdev subsystem init successfully 00:06:58.167 created a nvmf target service 00:06:58.167 create targets's poll groups done 00:06:58.167 all subsystems of target started 00:06:58.167 nvmf target is running 00:06:58.167 all subsystems of target stopped 00:06:58.167 destroy targets's poll groups done 00:06:58.167 destroyed the nvmf target service 00:06:58.167 bdev 
subsystem finish successfully 00:06:58.167 nvmf threads destroy successfully 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.167 06:54:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.743 06:54:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.743 06:54:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:58.743 06:54:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.743 06:54:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.743 00:06:58.743 real 0m15.167s 00:06:58.743 user 0m42.199s 00:06:58.743 sys 0m3.166s 00:06:58.743 06:54:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.743 06:54:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.743 ************************************ 00:06:58.743 END TEST nvmf_example 00:06:58.743 ************************************ 00:06:58.743 06:54:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.743 06:54:28 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.743 06:54:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.743 06:54:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.743 06:54:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.743 ************************************ 00:06:58.743 START TEST nvmf_filesystem 00:06:58.744 ************************************ 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.744 * Looking for test storage... 
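For orientation before the next test: the nvmf_example run that just finished strings together everything traced above, namely one e810 port (cvl_0_0) moved into a private namespace as the target side, its sibling (cvl_0_1) kept in the default namespace as the initiator side, a target wired up over JSON-RPC, and spdk_nvme_perf generating the 4 KiB randrw load behind the latency table. A condensed sketch of that flow, assuming the same interface names and a target app already listening on /var/tmp/spdk.sock:

  # network plumbing (nvmf_tcp_init in test/nvmf/common.sh)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # target configuration (the rpc_cmd calls traced above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512   # 64 MiB bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator load (the run that produced the ~14.9k IOPS table above)
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'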
00:06:58.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:58.744 06:54:28 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:58.744 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:58.744 #define SPDK_CONFIG_H 00:06:58.744 #define SPDK_CONFIG_APPS 1 00:06:58.744 #define SPDK_CONFIG_ARCH native 00:06:58.744 #undef SPDK_CONFIG_ASAN 00:06:58.744 #undef SPDK_CONFIG_AVAHI 00:06:58.744 #undef SPDK_CONFIG_CET 00:06:58.744 #define SPDK_CONFIG_COVERAGE 1 00:06:58.744 #define SPDK_CONFIG_CROSS_PREFIX 00:06:58.744 #undef SPDK_CONFIG_CRYPTO 00:06:58.744 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:58.744 #undef SPDK_CONFIG_CUSTOMOCF 00:06:58.744 #undef SPDK_CONFIG_DAOS 00:06:58.744 #define SPDK_CONFIG_DAOS_DIR 00:06:58.744 #define SPDK_CONFIG_DEBUG 1 00:06:58.744 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:58.744 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:58.744 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:58.745 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:58.745 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:58.745 #undef SPDK_CONFIG_DPDK_UADK 00:06:58.745 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.745 #define SPDK_CONFIG_EXAMPLES 1 00:06:58.745 #undef SPDK_CONFIG_FC 00:06:58.745 #define SPDK_CONFIG_FC_PATH 00:06:58.745 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:58.745 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:58.745 #undef SPDK_CONFIG_FUSE 00:06:58.745 #undef SPDK_CONFIG_FUZZER 00:06:58.745 #define SPDK_CONFIG_FUZZER_LIB 00:06:58.745 #undef SPDK_CONFIG_GOLANG 00:06:58.745 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:58.745 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:58.745 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:58.745 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:58.745 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:58.745 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:58.745 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:58.745 #define SPDK_CONFIG_IDXD 1 00:06:58.745 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:58.745 #undef SPDK_CONFIG_IPSEC_MB 00:06:58.745 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:58.745 #define SPDK_CONFIG_ISAL 1 00:06:58.745 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:58.745 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:58.745 #define 
SPDK_CONFIG_LIBDIR 00:06:58.745 #undef SPDK_CONFIG_LTO 00:06:58.745 #define SPDK_CONFIG_MAX_LCORES 128 00:06:58.745 #define SPDK_CONFIG_NVME_CUSE 1 00:06:58.745 #undef SPDK_CONFIG_OCF 00:06:58.745 #define SPDK_CONFIG_OCF_PATH 00:06:58.745 #define SPDK_CONFIG_OPENSSL_PATH 00:06:58.745 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:58.745 #define SPDK_CONFIG_PGO_DIR 00:06:58.745 #undef SPDK_CONFIG_PGO_USE 00:06:58.745 #define SPDK_CONFIG_PREFIX /usr/local 00:06:58.745 #undef SPDK_CONFIG_RAID5F 00:06:58.745 #undef SPDK_CONFIG_RBD 00:06:58.745 #define SPDK_CONFIG_RDMA 1 00:06:58.745 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:58.745 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:58.745 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:58.745 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:58.745 #define SPDK_CONFIG_SHARED 1 00:06:58.745 #undef SPDK_CONFIG_SMA 00:06:58.745 #define SPDK_CONFIG_TESTS 1 00:06:58.745 #undef SPDK_CONFIG_TSAN 00:06:58.745 #define SPDK_CONFIG_UBLK 1 00:06:58.745 #define SPDK_CONFIG_UBSAN 1 00:06:58.745 #undef SPDK_CONFIG_UNIT_TESTS 00:06:58.745 #undef SPDK_CONFIG_URING 00:06:58.745 #define SPDK_CONFIG_URING_PATH 00:06:58.745 #undef SPDK_CONFIG_URING_ZNS 00:06:58.745 #undef SPDK_CONFIG_USDT 00:06:58.745 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:58.745 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:58.745 #define SPDK_CONFIG_VFIO_USER 1 00:06:58.745 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:58.745 #define SPDK_CONFIG_VHOST 1 00:06:58.745 #define SPDK_CONFIG_VIRTIO 1 00:06:58.745 #undef SPDK_CONFIG_VTUNE 00:06:58.745 #define SPDK_CONFIG_VTUNE_DIR 00:06:58.745 #define SPDK_CONFIG_WERROR 1 00:06:58.745 #define SPDK_CONFIG_WPDK_DIR 00:06:58.745 #undef SPDK_CONFIG_XNVME 00:06:58.745 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
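Aside on the PATH values above: /etc/opt/spdk-pkgdep/paths/export.sh is re-sourced by every nested script, and each pass prepends the same go/golangci/protoc directories, so the exported PATH ends up carrying four or five copies of each entry. Lookup is unaffected (first hit wins), but a dedup pass along these lines would keep the traces readable (hypothetical helper, not part of the harness):

    # Drop repeated PATH entries, keeping the first occurrence of each.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH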
00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:58.745 
06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:58.745 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:58.746 
06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:58.746 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
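The long run of ': 0' / 'export SPDK_TEST_*' pairs above is autotest_common.sh applying flag defaults: ':' is the shell no-op, so evaluating it with an assign-default expansion keeps any value the caller already exported and fills in the default otherwise (xtrace prints the expanded result, hence the bare ': 0', ': tcp', ': e810'). A minimal sketch of the idiom, consistent with this trace (flag name chosen for illustration):

    # Keep a caller-supplied value, otherwise default to 0, then export.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF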
00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1400265 ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1400265 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.CK3Uu8 00:06:58.747 
06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.CK3Uu8/tests/target /tmp/spdk.CK3Uu8 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=54057742336 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7936966656 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996508672 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=847872 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:58.747 * Looking for test storage... 
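What set_test_storage is doing here: the request is 2147483648 bytes plus a 64 MiB pad (hence requested_size=2214592512), and the 'df -T | grep -v Filesystem' loop above fills the mounts/fss/avails/sizes maps so each candidate directory can be checked against it; the overlay root with ~54 GB available is accepted on the first pass, and the later new_size=10151559168 is simply current use 7936966656 plus the request. A rough standalone version of that probe (path reused from the log):

    # Report whether the filesystem backing $dir can hold $need bytes.
    dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    need=2214592512
    avail_kb=$(df -k "$dir" | awk 'NR==2 {print $4}')   # "Available" column, 1K blocks
    (( avail_kb * 1024 >= need )) && echo "ok: $dir has ${avail_kb}K free"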
00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=54057742336 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10151559168 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:58.747 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.748 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.037 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
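nvmf/common.sh, pulled in by target/filesystem.sh above, pins the run's constants: listener ports 4420/4421/4422, a default 192.168.100 address prefix (the TCP init further down switches to 10.0.0.1/10.0.0.2), and a host identity generated with nvme-cli. A sketch of the two assignments visible in the trace:

    # Requires nvme-cli; yields e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # bare UUID after the last colon
    echo "$NVME_HOSTNQN / $NVME_HOSTID"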
00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:59.038 06:54:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
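The e810/x722/mlx arrays being filled above map vendor:device IDs out of a PCI bus cache: 0x8086 0x1592/0x159b are Intel E810 parts, 0x8086 0x37d2 is X722, and the 0x15b3 entries span Mellanox ConnectX generations. With SPDK_TEST_NVMF_NICS=e810 exported earlier, only the e810 list survives into pci_devs, which is why the scan below finds exactly the two 0x159b functions. The same lookup can be done by hand with pciutils:

    # List E810 functions by vendor:device, matching the log's two hits.
    lspci -nn -d 8086:159b    # expect 0000:0a:00.0 and 0000:0a:00.1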
00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:00.939 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:00.939 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:00.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:00.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:00.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:07:00.939 00:07:00.939 --- 10.0.0.2 ping statistics --- 00:07:00.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.939 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:00.939 00:07:00.939 --- 10.0.0.1 ping statistics --- 00:07:00.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.939 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.939 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.940 06:54:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.940 06:54:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:00.940 06:54:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.940 06:54:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.940 06:54:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.198 ************************************ 00:07:01.198 START TEST nvmf_filesystem_no_in_capsule 00:07:01.198 ************************************ 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1401898 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1401898 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
1401898 ']' 00:07:01.198 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.199 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.199 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.199 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.199 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.199 [2024-07-13 06:54:30.461581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:01.199 [2024-07-13 06:54:30.461674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.199 [2024-07-13 06:54:30.499978] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.199 [2024-07-13 06:54:30.531916] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.199 [2024-07-13 06:54:30.626140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.199 [2024-07-13 06:54:30.626201] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.199 [2024-07-13 06:54:30.626230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.199 [2024-07-13 06:54:30.626244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.199 [2024-07-13 06:54:30.626257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
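The records above show nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace (after the bidirectional ping check) and waitforlisten blocking on /var/tmp/spdk.sock. A minimal sketch of the same startup pattern, with paths shortened to an SPDK tree's defaults; the polling loop is an assumed stand-in for waitforlisten, not a copy of it:

    # Start the target in the namespace and wait for its RPC socket.
    # Namespace name and flags are taken from this log; the loop body
    # is an assumption modeled on waitforlisten.
    NETNS=cvl_0_0_ns_spdk
    ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done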
00:07:01.199 [2024-07-13 06:54:30.626337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.199 [2024-07-13 06:54:30.626392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.199 [2024-07-13 06:54:30.626510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.199 [2024-07-13 06:54:30.626512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.457 [2024-07-13 06:54:30.783618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.457 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 Malloc1 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 [2024-07-13 06:54:30.956313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:01.715 { 00:07:01.715 "name": "Malloc1", 00:07:01.715 "aliases": [ 00:07:01.715 "9664dab8-6da8-47d5-b824-7cb5f630b3ca" 00:07:01.715 ], 00:07:01.715 "product_name": "Malloc disk", 00:07:01.715 "block_size": 512, 00:07:01.715 "num_blocks": 1048576, 00:07:01.715 "uuid": "9664dab8-6da8-47d5-b824-7cb5f630b3ca", 00:07:01.715 "assigned_rate_limits": { 00:07:01.715 "rw_ios_per_sec": 0, 00:07:01.715 "rw_mbytes_per_sec": 0, 00:07:01.715 "r_mbytes_per_sec": 0, 00:07:01.715 "w_mbytes_per_sec": 0 00:07:01.715 }, 00:07:01.715 "claimed": true, 00:07:01.715 "claim_type": "exclusive_write", 00:07:01.715 "zoned": false, 00:07:01.715 "supported_io_types": { 00:07:01.715 "read": true, 00:07:01.715 "write": true, 00:07:01.715 "unmap": true, 00:07:01.715 "flush": true, 00:07:01.715 "reset": true, 00:07:01.715 "nvme_admin": false, 00:07:01.715 "nvme_io": false, 00:07:01.715 "nvme_io_md": false, 00:07:01.715 "write_zeroes": true, 00:07:01.715 "zcopy": true, 00:07:01.715 "get_zone_info": false, 00:07:01.715 "zone_management": false, 00:07:01.715 "zone_append": false, 00:07:01.715 "compare": false, 00:07:01.715 "compare_and_write": false, 00:07:01.715 "abort": true, 00:07:01.715 "seek_hole": false, 00:07:01.715 "seek_data": false, 00:07:01.715 "copy": true, 00:07:01.715 "nvme_iov_md": false 00:07:01.715 }, 00:07:01.715 "memory_domains": [ 00:07:01.715 { 
00:07:01.715 "dma_device_id": "system", 00:07:01.715 "dma_device_type": 1 00:07:01.715 }, 00:07:01.715 { 00:07:01.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.715 "dma_device_type": 2 00:07:01.715 } 00:07:01.715 ], 00:07:01.715 "driver_specific": {} 00:07:01.715 } 00:07:01.715 ]' 00:07:01.715 06:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:01.715 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:01.715 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:01.715 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:01.715 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:01.715 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:01.715 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:01.715 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:02.280 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:02.280 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:02.280 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:02.280 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:02.280 06:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:04.804 06:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:05.370 06:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.303 ************************************ 00:07:06.303 START TEST filesystem_ext4 00:07:06.303 ************************************ 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:06.303 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:06.303 06:54:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:06.303 mke2fs 1.46.5 (30-Dec-2021) 00:07:06.561 Discarding device blocks: 0/522240 done 00:07:06.561 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:06.561 Filesystem UUID: b7b60634-6b9b-4090-936c-0c31e4b57db9 00:07:06.561 Superblock backups stored on blocks: 00:07:06.561 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:06.561 00:07:06.561 Allocating group tables: 0/64 done 00:07:06.561 Writing inode tables: 0/64 done 00:07:06.561 Creating journal (8192 blocks): done 00:07:06.561 Writing superblocks and filesystem accounting information: 0/64 done 00:07:06.561 00:07:06.561 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:06.561 06:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:06.561 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:06.819 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:06.819 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:06.819 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:06.819 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:06.819 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1401898 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:06.820 00:07:06.820 real 0m0.461s 00:07:06.820 user 0m0.021s 00:07:06.820 sys 0m0.054s 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:06.820 ************************************ 00:07:06.820 END TEST filesystem_ext4 00:07:06.820 ************************************ 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:06.820 06:54:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.820 ************************************ 00:07:06.820 START TEST filesystem_btrfs 00:07:06.820 ************************************ 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:06.820 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:07.385 btrfs-progs v6.6.2 00:07:07.385 See https://btrfs.readthedocs.io for more information. 00:07:07.385 00:07:07.385 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
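The xtrace above steps through common/autotest_common.sh's make_filesystem helper: it selects -F when the requested fstype is ext4 (mkfs.ext4 refuses to overwrite non-interactively otherwise) and -f for btrfs and xfs. A condensed sketch of that selection; only the variable setup and the final mkfs call are visible in this trace, so the function body is a reconstruction rather than the exact source:

    # Reconstructed from the trace: the force flag depends on the fstype.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0      # retry counter, visible as 'local i=0' in the trace
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F   # mkfs.ext4 forces with -F
        else
            force=-f   # mkfs.btrfs / mkfs.xfs force with -f
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }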
00:07:07.385 NOTE: several default settings have changed in version 5.15, please make sure 00:07:07.385 this does not affect your deployments: 00:07:07.385 - DUP for metadata (-m dup) 00:07:07.385 - enabled no-holes (-O no-holes) 00:07:07.385 - enabled free-space-tree (-R free-space-tree) 00:07:07.385 00:07:07.385 Label: (null) 00:07:07.385 UUID: 6f4132b0-a864-4916-80cf-a5a4c1615cdd 00:07:07.385 Node size: 16384 00:07:07.385 Sector size: 4096 00:07:07.385 Filesystem size: 510.00MiB 00:07:07.385 Block group profiles: 00:07:07.385 Data: single 8.00MiB 00:07:07.385 Metadata: DUP 32.00MiB 00:07:07.385 System: DUP 8.00MiB 00:07:07.385 SSD detected: yes 00:07:07.385 Zoned device: no 00:07:07.385 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:07.385 Runtime features: free-space-tree 00:07:07.385 Checksum: crc32c 00:07:07.385 Number of devices: 1 00:07:07.385 Devices: 00:07:07.385 ID SIZE PATH 00:07:07.385 1 510.00MiB /dev/nvme0n1p1 00:07:07.385 00:07:07.385 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:07.385 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1401898 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.643 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.644 00:07:07.644 real 0m0.784s 00:07:07.644 user 0m0.021s 00:07:07.644 sys 0m0.106s 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:07.644 ************************************ 00:07:07.644 END TEST filesystem_btrfs 00:07:07.644 ************************************ 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.644 06:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.644 ************************************ 00:07:07.644 START TEST filesystem_xfs 00:07:07.644 ************************************ 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.644 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:07.902 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:07.902 = sectsz=512 attr=2, projid32bit=1 00:07:07.902 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:07.902 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:07.902 data = bsize=4096 blocks=130560, imaxpct=25 00:07:07.902 = sunit=0 swidth=0 blks 00:07:07.902 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:07.902 log =internal log bsize=4096 blocks=16384, version=2 00:07:07.902 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:07.902 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:08.467 Discarding blocks...Done. 
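Each filesystem_* test above runs the same verify cycle once mkfs returns: mount the partition, create and delete a file with syncs around it, unmount, then confirm the target process and the exported block devices survived. Condensed from the trace (device names are the ones used in this run; $nvmfpid stands for the pid checked with kill -0, 1401898 in this phase):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process still alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition table intact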
00:07:08.467 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:08.467 06:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1401898 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:10.365 00:07:10.365 real 0m2.783s 00:07:10.365 user 0m0.011s 00:07:10.365 sys 0m0.064s 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:10.365 ************************************ 00:07:10.365 END TEST filesystem_xfs 00:07:10.365 ************************************ 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:10.365 06:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:10.623 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:10.623 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.881 06:54:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1401898 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1401898 ']' 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1401898 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1401898 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1401898' 00:07:10.881 killing process with pid 1401898 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1401898 00:07:10.881 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1401898 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:11.447 00:07:11.447 real 0m10.213s 00:07:11.447 user 0m39.051s 00:07:11.447 sys 0m1.606s 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 ************************************ 00:07:11.447 END TEST nvmf_filesystem_no_in_capsule 00:07:11.447 ************************************ 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 ************************************ 00:07:11.447 START TEST nvmf_filesystem_in_capsule 00:07:11.447 ************************************ 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1403316 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1403316 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1403316 ']' 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.447 06:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 [2024-07-13 06:54:40.725665] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:11.447 [2024-07-13 06:54:40.725744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.447 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.447 [2024-07-13 06:54:40.764223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:11.447 [2024-07-13 06:54:40.790696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.447 [2024-07-13 06:54:40.878278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
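This second phase repeats the same filesystem checks with in_capsule=4096: the transport created below passes -c 4096, so write payloads up to 4 KiB travel inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. The rpc_cmd line in the trace maps to a plain rpc.py call; the nvmf_get_transports check here is an added suggestion for inspecting the result, not part of this test:

    # Transport creation as issued below via rpc_cmd
    # (-c sets the in-capsule data size in bytes).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # Assumed verification step: dump the active transport options.
    ./scripts/rpc.py nvmf_get_transports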
00:07:11.447 [2024-07-13 06:54:40.878331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.447 [2024-07-13 06:54:40.878344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.447 [2024-07-13 06:54:40.878354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.447 [2024-07-13 06:54:40.878364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.447 [2024-07-13 06:54:40.878447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.447 [2024-07-13 06:54:40.878513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.447 [2024-07-13 06:54:40.878579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.447 [2024-07-13 06:54:40.878581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.706 [2024-07-13 06:54:41.032799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.706 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.963 Malloc1 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.963 06:54:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.963 [2024-07-13 06:54:41.207531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:11.963 { 00:07:11.963 "name": "Malloc1", 00:07:11.963 "aliases": [ 00:07:11.963 "a778f642-611c-4b12-bf16-b6ec8e37104e" 00:07:11.963 ], 00:07:11.963 "product_name": "Malloc disk", 00:07:11.963 "block_size": 512, 00:07:11.963 "num_blocks": 1048576, 00:07:11.963 "uuid": "a778f642-611c-4b12-bf16-b6ec8e37104e", 00:07:11.963 "assigned_rate_limits": { 00:07:11.963 "rw_ios_per_sec": 0, 00:07:11.963 "rw_mbytes_per_sec": 0, 00:07:11.963 "r_mbytes_per_sec": 0, 00:07:11.963 "w_mbytes_per_sec": 0 00:07:11.963 }, 00:07:11.963 "claimed": true, 00:07:11.963 "claim_type": "exclusive_write", 00:07:11.963 "zoned": false, 00:07:11.963 "supported_io_types": { 00:07:11.963 "read": true, 00:07:11.963 "write": true, 00:07:11.963 "unmap": true, 00:07:11.963 "flush": true, 00:07:11.963 "reset": true, 00:07:11.963 "nvme_admin": false, 00:07:11.963 "nvme_io": false, 00:07:11.963 "nvme_io_md": false, 00:07:11.963 "write_zeroes": true, 
00:07:11.963 "zcopy": true, 00:07:11.963 "get_zone_info": false, 00:07:11.963 "zone_management": false, 00:07:11.963 "zone_append": false, 00:07:11.963 "compare": false, 00:07:11.963 "compare_and_write": false, 00:07:11.963 "abort": true, 00:07:11.963 "seek_hole": false, 00:07:11.963 "seek_data": false, 00:07:11.963 "copy": true, 00:07:11.963 "nvme_iov_md": false 00:07:11.963 }, 00:07:11.963 "memory_domains": [ 00:07:11.963 { 00:07:11.963 "dma_device_id": "system", 00:07:11.963 "dma_device_type": 1 00:07:11.963 }, 00:07:11.963 { 00:07:11.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.963 "dma_device_type": 2 00:07:11.963 } 00:07:11.963 ], 00:07:11.963 "driver_specific": {} 00:07:11.963 } 00:07:11.963 ]' 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:11.963 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:11.964 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:11.964 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:11.964 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:11.964 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.528 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.528 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:12.528 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.528 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:12.528 06:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:14.453 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:14.453 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:14.453 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:14.710 06:54:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:14.710 06:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:14.710 06:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:15.643 06:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.577 ************************************ 00:07:16.577 START TEST filesystem_in_capsule_ext4 00:07:16.577 ************************************ 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:16.577 06:54:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:16.577 06:54:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:16.577 mke2fs 1.46.5 (30-Dec-2021) 00:07:16.577 Discarding device blocks: 0/522240 done 00:07:16.577 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:16.577 Filesystem UUID: 768c40e1-bdd7-4b9b-b59f-35ee23360ac6 00:07:16.577 Superblock backups stored on blocks: 00:07:16.577 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:16.577 00:07:16.577 Allocating group tables: 0/64 done 00:07:16.577 Writing inode tables: 0/64 done 00:07:16.835 Creating journal (8192 blocks): done 00:07:17.769 Writing superblocks and filesystem accounting information: 0/64 done 00:07:17.769 00:07:17.769 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.027 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.027 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:18.027 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.027 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:18.027 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:18.027 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1403316 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.286 00:07:18.286 real 0m1.614s 00:07:18.286 user 0m0.014s 00:07:18.286 sys 0m0.063s 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
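The sec_size_to_bytes calls traced earlier (setup/common.sh) resolve the connected namespace's capacity, which target/filesystem.sh compares against the 512 MiB malloc bdev before partitioning. Only the /sys/block existence test and the final echo 536870912 appear in the trace, so the arithmetic here is an assumption based on sysfs reporting size in 512-byte sectors:

    # Assumed shape of a sec_size_to_bytes-style helper.
    sec_size_to_bytes() {
        local dev=$1
        [[ -e /sys/block/$dev ]] || return 1
        # /sys/block/<dev>/size counts 512-byte sectors.
        echo $(( $(cat "/sys/block/$dev/size") * 512 ))
    }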
00:07:18.286 ************************************ 00:07:18.286 END TEST filesystem_in_capsule_ext4 00:07:18.286 ************************************ 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.286 ************************************ 00:07:18.286 START TEST filesystem_in_capsule_btrfs 00:07:18.286 ************************************ 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:18.286 btrfs-progs v6.6.2 00:07:18.286 See https://btrfs.readthedocs.io for more information. 00:07:18.286 00:07:18.286 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:18.286 NOTE: several default settings have changed in version 5.15, please make sure 00:07:18.286 this does not affect your deployments: 00:07:18.286 - DUP for metadata (-m dup) 00:07:18.286 - enabled no-holes (-O no-holes) 00:07:18.286 - enabled free-space-tree (-R free-space-tree) 00:07:18.286 00:07:18.286 Label: (null) 00:07:18.286 UUID: 9d9f5203-28ac-4451-a033-84809b47b6aa 00:07:18.286 Node size: 16384 00:07:18.286 Sector size: 4096 00:07:18.286 Filesystem size: 510.00MiB 00:07:18.286 Block group profiles: 00:07:18.286 Data: single 8.00MiB 00:07:18.286 Metadata: DUP 32.00MiB 00:07:18.286 System: DUP 8.00MiB 00:07:18.286 SSD detected: yes 00:07:18.286 Zoned device: no 00:07:18.286 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:18.286 Runtime features: free-space-tree 00:07:18.286 Checksum: crc32c 00:07:18.286 Number of devices: 1 00:07:18.286 Devices: 00:07:18.286 ID SIZE PATH 00:07:18.286 1 510.00MiB /dev/nvme0n1p1 00:07:18.286 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:18.286 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.544 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.803 06:54:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1403316 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.803 00:07:18.803 real 0m0.496s 00:07:18.803 user 0m0.021s 00:07:18.803 sys 0m0.106s 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:18.803 ************************************ 00:07:18.803 END TEST filesystem_in_capsule_btrfs 00:07:18.803 ************************************ 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.803 ************************************ 00:07:18.803 START TEST filesystem_in_capsule_xfs 00:07:18.803 ************************************ 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:18.803 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:18.803 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:18.803 = sectsz=512 attr=2, projid32bit=1 00:07:18.803 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:18.803 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:18.803 data = bsize=4096 blocks=130560, imaxpct=25 00:07:18.803 = sunit=0 swidth=0 blks 00:07:18.803 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:18.803 log =internal log bsize=4096 blocks=16384, version=2 00:07:18.803 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:18.803 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:19.764 Discarding blocks...Done. 
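Across all three passes the xtrace shows make_filesystem choosing the force flag per filesystem: mke2fs spells it -F (autotest_common.sh@929-930), while mkfs.btrfs and mkfs.xfs take lowercase -f (@932). A hedged reconstruction of that branch from the trace alone; the real helper in common/autotest_common.sh also keeps a retry counter i, omitted here:

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F                    # mke2fs-style force flag
      else
          force=-f                    # btrfs/xfs-style force flag
      fi
      "mkfs.$fstype" $force "$dev_name" && return 0
  }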
00:07:19.764 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:19.764 06:54:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1403316 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:21.663 00:07:21.663 real 0m2.655s 00:07:21.663 user 0m0.016s 00:07:21.663 sys 0m0.060s 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:21.663 ************************************ 00:07:21.663 END TEST filesystem_in_capsule_xfs 00:07:21.663 ************************************ 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:21.663 06:54:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:21.663 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:21.663 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.663 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.663 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:21.921 06:54:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1403316 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1403316 ']' 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1403316 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1403316 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1403316' 00:07:21.921 killing process with pid 1403316 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1403316 00:07:21.921 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1403316 00:07:22.179 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:22.179 00:07:22.179 real 0m10.933s 00:07:22.179 user 0m41.912s 00:07:22.179 sys 0m1.721s 00:07:22.179 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.179 06:54:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.179 ************************************ 00:07:22.179 END TEST nvmf_filesystem_in_capsule 00:07:22.179 ************************************ 00:07:22.179 06:54:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.438 rmmod nvme_tcp 00:07:22.438 rmmod nvme_fabrics 00:07:22.438 rmmod nvme_keyring 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.438 06:54:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.341 06:54:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:24.341 00:07:24.341 real 0m25.684s 00:07:24.341 user 1m21.909s 00:07:24.341 sys 0m4.928s 00:07:24.341 06:54:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.341 06:54:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.341 ************************************ 00:07:24.341 END TEST nvmf_filesystem 00:07:24.341 ************************************ 00:07:24.341 06:54:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:24.341 06:54:53 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:24.341 06:54:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.341 06:54:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.341 06:54:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.341 ************************************ 00:07:24.341 START TEST nvmf_target_discovery 00:07:24.341 ************************************ 00:07:24.341 06:54:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:24.598 * Looking for test storage... 
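discovery.sh, which starts here, stands up an SPDK target, puts four null bdevs behind four NVMe subsystems plus a discovery referral, and verifies the result with both nvme discover and the nvmf_get_subsystems RPC. A condensed one-subsystem version of the setup it drives through rpc_cmd (a wrapper around scripts/rpc.py); addresses, ports, and NQNs are the ones visible in the log below:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_null_create Null1 102400 512        # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420        # expect cnode1 plus the 4430 referral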
00:07:24.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.598 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.599 06:54:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.498 06:54:55 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:26.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:26.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:26.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:26.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:07:26.498 00:07:26.498 --- 10.0.0.2 ping statistics --- 00:07:26.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.498 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:07:26.498 00:07:26.498 --- 10.0.0.1 ping statistics --- 00:07:26.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.498 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1406669 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1406669 00:07:26.498 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1406669 ']' 00:07:26.499 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.499 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.499 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:26.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.499 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.499 06:54:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.757 [2024-07-13 06:54:55.978956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:26.757 [2024-07-13 06:54:55.979029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.757 [2024-07-13 06:54:56.020232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.757 [2024-07-13 06:54:56.047710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.757 [2024-07-13 06:54:56.131066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.757 [2024-07-13 06:54:56.131121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.757 [2024-07-13 06:54:56.131148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.757 [2024-07-13 06:54:56.131160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.757 [2024-07-13 06:54:56.131170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.757 [2024-07-13 06:54:56.131263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.757 [2024-07-13 06:54:56.131328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.757 [2024-07-13 06:54:56.131398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.757 [2024-07-13 06:54:56.131401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.015 [2024-07-13 06:54:56.283739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.015 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:27.016 06:54:56 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 Null1 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 [2024-07-13 06:54:56.324043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 Null2 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 Null3 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 Null4 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.016 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:27.274 00:07:27.274 Discovery Log Number of Records 6, Generation counter 6 00:07:27.274 =====Discovery Log Entry 0====== 00:07:27.274 trtype: tcp 00:07:27.274 adrfam: ipv4 00:07:27.274 subtype: current discovery subsystem 00:07:27.274 treq: not required 00:07:27.274 portid: 0 00:07:27.274 trsvcid: 4420 00:07:27.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:27.274 traddr: 10.0.0.2 00:07:27.274 eflags: explicit discovery connections, duplicate discovery information 00:07:27.274 sectype: none 00:07:27.274 =====Discovery Log Entry 1====== 00:07:27.274 trtype: tcp 00:07:27.274 adrfam: ipv4 00:07:27.274 subtype: nvme subsystem 00:07:27.274 treq: not required 00:07:27.274 portid: 0 00:07:27.274 trsvcid: 4420 00:07:27.274 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:27.274 traddr: 10.0.0.2 00:07:27.274 eflags: none 00:07:27.274 sectype: none 00:07:27.274 =====Discovery Log Entry 2====== 00:07:27.274 trtype: tcp 00:07:27.274 adrfam: ipv4 00:07:27.274 subtype: nvme subsystem 00:07:27.274 treq: not required 00:07:27.274 portid: 0 00:07:27.274 trsvcid: 4420 00:07:27.274 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:27.274 traddr: 10.0.0.2 00:07:27.274 eflags: none 00:07:27.274 sectype: none 00:07:27.274 =====Discovery Log Entry 3====== 00:07:27.275 trtype: tcp 00:07:27.275 adrfam: ipv4 00:07:27.275 subtype: nvme subsystem 00:07:27.275 treq: not required 00:07:27.275 portid: 0 00:07:27.275 trsvcid: 4420 00:07:27.275 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:27.275 traddr: 10.0.0.2 
00:07:27.275 eflags: none 00:07:27.275 sectype: none 00:07:27.275 =====Discovery Log Entry 4====== 00:07:27.275 trtype: tcp 00:07:27.275 adrfam: ipv4 00:07:27.275 subtype: nvme subsystem 00:07:27.275 treq: not required 00:07:27.275 portid: 0 00:07:27.275 trsvcid: 4420 00:07:27.275 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:27.275 traddr: 10.0.0.2 00:07:27.275 eflags: none 00:07:27.275 sectype: none 00:07:27.275 =====Discovery Log Entry 5====== 00:07:27.275 trtype: tcp 00:07:27.275 adrfam: ipv4 00:07:27.275 subtype: discovery subsystem referral 00:07:27.275 treq: not required 00:07:27.275 portid: 0 00:07:27.275 trsvcid: 4430 00:07:27.275 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:27.275 traddr: 10.0.0.2 00:07:27.275 eflags: none 00:07:27.275 sectype: none 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:27.275 Perform nvmf subsystem discovery via RPC 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 [ 00:07:27.275 { 00:07:27.275 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:27.275 "subtype": "Discovery", 00:07:27.275 "listen_addresses": [ 00:07:27.275 { 00:07:27.275 "trtype": "TCP", 00:07:27.275 "adrfam": "IPv4", 00:07:27.275 "traddr": "10.0.0.2", 00:07:27.275 "trsvcid": "4420" 00:07:27.275 } 00:07:27.275 ], 00:07:27.275 "allow_any_host": true, 00:07:27.275 "hosts": [] 00:07:27.275 }, 00:07:27.275 { 00:07:27.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.275 "subtype": "NVMe", 00:07:27.275 "listen_addresses": [ 00:07:27.275 { 00:07:27.275 "trtype": "TCP", 00:07:27.275 "adrfam": "IPv4", 00:07:27.275 "traddr": "10.0.0.2", 00:07:27.275 "trsvcid": "4420" 00:07:27.275 } 00:07:27.275 ], 00:07:27.275 "allow_any_host": true, 00:07:27.275 "hosts": [], 00:07:27.275 "serial_number": "SPDK00000000000001", 00:07:27.275 "model_number": "SPDK bdev Controller", 00:07:27.275 "max_namespaces": 32, 00:07:27.275 "min_cntlid": 1, 00:07:27.275 "max_cntlid": 65519, 00:07:27.275 "namespaces": [ 00:07:27.275 { 00:07:27.275 "nsid": 1, 00:07:27.275 "bdev_name": "Null1", 00:07:27.275 "name": "Null1", 00:07:27.275 "nguid": "18FE6D00AEE14FE2B5A4CE70D873437C", 00:07:27.275 "uuid": "18fe6d00-aee1-4fe2-b5a4-ce70d873437c" 00:07:27.275 } 00:07:27.275 ] 00:07:27.275 }, 00:07:27.275 { 00:07:27.275 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:27.275 "subtype": "NVMe", 00:07:27.275 "listen_addresses": [ 00:07:27.275 { 00:07:27.275 "trtype": "TCP", 00:07:27.275 "adrfam": "IPv4", 00:07:27.275 "traddr": "10.0.0.2", 00:07:27.275 "trsvcid": "4420" 00:07:27.275 } 00:07:27.275 ], 00:07:27.275 "allow_any_host": true, 00:07:27.275 "hosts": [], 00:07:27.275 "serial_number": "SPDK00000000000002", 00:07:27.275 "model_number": "SPDK bdev Controller", 00:07:27.275 "max_namespaces": 32, 00:07:27.275 "min_cntlid": 1, 00:07:27.275 "max_cntlid": 65519, 00:07:27.275 "namespaces": [ 00:07:27.275 { 00:07:27.275 "nsid": 1, 00:07:27.275 "bdev_name": "Null2", 00:07:27.275 "name": "Null2", 00:07:27.275 "nguid": "08E7C1895B534C4FB30500829701B210", 00:07:27.275 "uuid": "08e7c189-5b53-4c4f-b305-00829701b210" 00:07:27.275 } 00:07:27.275 ] 00:07:27.275 }, 00:07:27.275 { 00:07:27.275 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:27.275 "subtype": "NVMe", 00:07:27.275 "listen_addresses": [ 
00:07:27.275 { 00:07:27.275 "trtype": "TCP", 00:07:27.275 "adrfam": "IPv4", 00:07:27.275 "traddr": "10.0.0.2", 00:07:27.275 "trsvcid": "4420" 00:07:27.275 } 00:07:27.275 ], 00:07:27.275 "allow_any_host": true, 00:07:27.275 "hosts": [], 00:07:27.275 "serial_number": "SPDK00000000000003", 00:07:27.275 "model_number": "SPDK bdev Controller", 00:07:27.275 "max_namespaces": 32, 00:07:27.275 "min_cntlid": 1, 00:07:27.275 "max_cntlid": 65519, 00:07:27.275 "namespaces": [ 00:07:27.275 { 00:07:27.275 "nsid": 1, 00:07:27.275 "bdev_name": "Null3", 00:07:27.275 "name": "Null3", 00:07:27.275 "nguid": "A4A03AB7A43B487DB5AC1431C2506F24", 00:07:27.275 "uuid": "a4a03ab7-a43b-487d-b5ac-1431c2506f24" 00:07:27.275 } 00:07:27.275 ] 00:07:27.275 }, 00:07:27.275 { 00:07:27.275 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:27.275 "subtype": "NVMe", 00:07:27.275 "listen_addresses": [ 00:07:27.275 { 00:07:27.275 "trtype": "TCP", 00:07:27.275 "adrfam": "IPv4", 00:07:27.275 "traddr": "10.0.0.2", 00:07:27.275 "trsvcid": "4420" 00:07:27.275 } 00:07:27.275 ], 00:07:27.275 "allow_any_host": true, 00:07:27.275 "hosts": [], 00:07:27.275 "serial_number": "SPDK00000000000004", 00:07:27.275 "model_number": "SPDK bdev Controller", 00:07:27.275 "max_namespaces": 32, 00:07:27.275 "min_cntlid": 1, 00:07:27.275 "max_cntlid": 65519, 00:07:27.275 "namespaces": [ 00:07:27.275 { 00:07:27.275 "nsid": 1, 00:07:27.275 "bdev_name": "Null4", 00:07:27.275 "name": "Null4", 00:07:27.275 "nguid": "EFFA3CF7D84142B6950AAD3CE88F10FD", 00:07:27.275 "uuid": "effa3cf7-d841-42b6-950a-ad3ce88f10fd" 00:07:27.275 } 00:07:27.275 ] 00:07:27.275 } 00:07:27.275 ] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:27.275 
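The teardown above runs entirely over SPDK's JSON-RPC socket: each of the four subsystems is deleted, then the null bdev that backed its namespace, then the port-4430 referral, and finally bdev_get_bdevs confirms nothing is left. A minimal sketch of the same sequence, assuming the scripts/rpc.py helper from the SPDK tree and the default /var/tmp/spdk.sock (rpc_cmd in the trace is a wrapper around the same interface):

    # delete one subsystem, then the null bdev that backed its namespace
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete Null1
    # drop the discovery referral advertised on port 4430
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    # verify no bdevs remain
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'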
06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.275 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.275 rmmod nvme_tcp 00:07:27.275 rmmod nvme_fabrics 00:07:27.275 rmmod nvme_keyring 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1406669 ']' 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1406669 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1406669 ']' 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1406669 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1406669 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1406669' 00:07:27.539 killing process with pid 1406669 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1406669 00:07:27.539 06:54:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1406669 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.796 06:54:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.696 06:54:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:29.696 00:07:29.696 real 0m5.289s 00:07:29.696 user 0m4.217s 00:07:29.696 sys 0m1.799s 00:07:29.696 06:54:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.696 06:54:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:07:29.696 ************************************ 00:07:29.696 END TEST nvmf_target_discovery 00:07:29.696 ************************************ 00:07:29.696 06:54:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:29.696 06:54:59 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:29.696 06:54:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.696 06:54:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.696 06:54:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.696 ************************************ 00:07:29.696 START TEST nvmf_referrals 00:07:29.696 ************************************ 00:07:29.696 06:54:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:29.696 * Looking for test storage... 00:07:29.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # 
NVMF_REFERRAL_IP_3=127.0.0.4 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.955 06:54:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:31.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:31.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.872 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.873 
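At this point nvmftestinit is mapping the matched PCI functions to kernel net devices, and the names come straight from sysfs. A sketch of the lookup, assuming the 0000:0a:00.0 e810 function the trace just matched (the '[[ up == up ]]' check most likely reads the interface's operstate):

    # each PCI network function exposes its netdev name under sysfs
    ls /sys/bus/pci/devices/0000:0a:00.0/net
    # cvl_0_0
    cat /sys/class/net/cvl_0_0/operstate
    # up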
06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:31.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:31.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.873 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.132 06:55:01 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:07:32.132 00:07:32.132 --- 10.0.0.2 ping statistics --- 00:07:32.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.132 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:32.132 00:07:32.132 --- 10.0.0.1 ping statistics --- 00:07:32.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.132 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1408756 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1408756 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1408756 ']' 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:32.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.132 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.132 [2024-07-13 06:55:01.494106] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:32.132 [2024-07-13 06:55:01.494198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.132 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.132 [2024-07-13 06:55:01.532986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.132 [2024-07-13 06:55:01.563088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.390 [2024-07-13 06:55:01.657417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.390 [2024-07-13 06:55:01.657490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.390 [2024-07-13 06:55:01.657507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.390 [2024-07-13 06:55:01.657520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.390 [2024-07-13 06:55:01.657533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.390 [2024-07-13 06:55:01.657624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.390 [2024-07-13 06:55:01.657687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.390 [2024-07-13 06:55:01.657812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.390 [2024-07-13 06:55:01.657814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.390 [2024-07-13 06:55:01.820741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.390 
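The transport creation and discovery listener above are followed below by three referral registrations (127.0.0.2, 127.0.0.3 and 127.0.0.4, all on port 4430), which a discovering host should then be handed back in its discovery log page. A sketch of the same bring-up and the host-side check, assuming the rpc.py helper and the addresses used in this run:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    # the host-side view: referral entries appear in the discovery log page
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json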
06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.390 [2024-07-13 06:55:01.832971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:32.390 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.648 06:55:01 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.648 06:55:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.906 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:33.164 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 
8009 -o json 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:33.165 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:33.422 06:55:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:33.423 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:33.680 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- 
# echo 127.0.0.2 00:07:33.680 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:33.680 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:33.680 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:33.680 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:33.680 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:33.680 06:55:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:33.680 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:33.680 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:33.680 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:33.680 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:33.680 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:33.680 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.937 rmmod nvme_tcp 00:07:33.937 rmmod nvme_fabrics 00:07:33.937 rmmod nvme_keyring 00:07:33.937 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1408756 ']' 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1408756 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1408756 ']' 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1408756 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1408756 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1408756' 00:07:34.195 killing process with pid 1408756 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1408756 00:07:34.195 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1408756 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.454 06:55:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.353 06:55:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:07:36.353 00:07:36.353 real 0m6.602s 00:07:36.353 user 0m9.355s 00:07:36.353 sys 0m2.187s 00:07:36.353 06:55:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.353 06:55:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.353 ************************************ 00:07:36.353 END TEST nvmf_referrals 00:07:36.353 ************************************ 00:07:36.353 06:55:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:36.353 06:55:05 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:36.353 06:55:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:36.353 06:55:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.353 06:55:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.353 ************************************ 00:07:36.353 START TEST nvmf_connect_disconnect 00:07:36.353 ************************************ 00:07:36.353 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:36.353 * Looking for test storage... 00:07:36.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.611 06:55:05 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.611 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
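build_nvmf_app_args is assembling the target's command line here (the ${NO_HUGE[@]} append follows below); once nvmftestinit has moved an interface into the target namespace, nvmfappstart launches the binary inside it. Roughly, with the values this run used earlier:

    # start nvmf_tgt in the target network namespace with shm id 0,
    # the full tracepoint mask and a 4-core mask
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF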
00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.612 06:55:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.531 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:38.532 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:38.532 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:38.532 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:38.532 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:07:38.532 00:07:38.532 --- 10.0.0.2 ping statistics --- 00:07:38.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.532 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
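
The nvmf_tcp_init block traced above builds the test topology: one port of the NIC (cvl_0_0) is moved into a private network namespace for the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A rough shell equivalent of the traced commands:

  # Sketch of nvmf_tcp_init as it appears in the xtrace; interface names and
  # addresses are the ones the harness chose on this machine.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # reachability check, as in the replies below
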
00:07:38.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:07:38.532 00:07:38.532 --- 10.0.0.1 ping statistics --- 00:07:38.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.532 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1411048 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1411048 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1411048 ']' 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.532 06:55:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:38.532 [2024-07-13 06:55:07.967725] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:38.532 [2024-07-13 06:55:07.967817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.790 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.790 [2024-07-13 06:55:08.005633] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:38.790 [2024-07-13 06:55:08.037669] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.790 [2024-07-13 06:55:08.128692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.790 [2024-07-13 06:55:08.128756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.790 [2024-07-13 06:55:08.128782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.790 [2024-07-13 06:55:08.128797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.790 [2024-07-13 06:55:08.128808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.790 [2024-07-13 06:55:08.128895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.790 [2024-07-13 06:55:08.128939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.790 [2024-07-13 06:55:08.129039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.790 [2024-07-13 06:55:08.129042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 [2024-07-13 06:55:08.282800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.049 06:55:08 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 [2024-07-13 06:55:08.335719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:39.049 06:55:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:07:41.576 .. 00:11:30.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [this notice repeats once per iteration, 100 iterations in total; per-iteration timestamps condensed]
00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:30.645 rmmod nvme_tcp 00:11:30.645 rmmod nvme_fabrics 00:11:30.645 rmmod nvme_keyring 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1411048 ']' 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1411048 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1411048 ']' 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1411048 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux =
Linux ']' 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1411048 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1411048' 00:11:30.645 killing process with pid 1411048 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1411048 00:11:30.645 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1411048 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.903 06:59:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.431 06:59:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:33.431 00:11:33.431 real 3m56.638s 00:11:33.431 user 14m59.801s 00:11:33.431 sys 0m36.408s 00:11:33.431 06:59:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.431 06:59:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.431 ************************************ 00:11:33.431 END TEST nvmf_connect_disconnect 00:11:33.431 ************************************ 00:11:33.431 06:59:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:33.431 06:59:02 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:33.431 06:59:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.431 06:59:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.431 06:59:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.431 ************************************ 00:11:33.431 START TEST nvmf_multitarget 00:11:33.431 ************************************ 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:33.431 * Looking for test storage... 
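
For reference, the nvmf_connect_disconnect run that just completed reduces to a target setup plus a 100-iteration initiator loop. The loop body below is inferred from connect_disconnect.sh's xtrace and the notices it produced, not copied from the script:

  # Target side, via the rpc_cmd wrapper seen in the trace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512        # 64 MiB malloc bdev -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: 100 connect/disconnect cycles (num_iterations=100 in the trace)
  for ((i = 0; i < 100; i++)); do
      # host NQN/ID arguments come from common.sh; exact flag order is inferred
      nvme connect -i 8 "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits "disconnected 1 controller(s)"
  done

Each cycle took roughly 2.4 s on this machine, which is where the 3m56s real time reported above comes from.
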
00:11:33.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repeated golangci/protoc/go toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.431 06:59:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...repeated toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
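
multitarget.sh drives the multitarget_rpc.py helper set up above to create and destroy extra NVMe-oF targets inside the single nvmf_tgt process, checking the target count with jq at each step. Condensed from the xtrace further below (the expected counts are taken from the '[ N != N ]' assertions in the trace):

  # Multitarget flow, as exercised by the test script
  $rpc_py nvmf_get_targets | jq length           # 1: only the default target
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc_py nvmf_get_targets | jq length           # now 3
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  $rpc_py nvmf_get_targets | jq length           # back to 1
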
00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.432 06:59:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:35.367 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:35.367 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:35.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:35.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
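
The device-discovery block above matches known Intel E810/X722 and Mellanox PCI device IDs and then resolves each matched PCI function to its kernel netdev through sysfs. The core of that loop, as traced:

  # For each candidate PCI function, look up its net interface(s) in sysfs
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the ifname
      net_devs+=("${pci_net_devs[@]}")          # here: cvl_0_0 and cvl_0_1
  done
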
00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:35.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:11:35.368 00:11:35.368 --- 10.0.0.2 ping statistics --- 00:11:35.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.368 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:35.368 00:11:35.368 --- 10.0.0.1 ping statistics --- 00:11:35.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.368 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1442240 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1442240 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1442240 ']' 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.368 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.368 [2024-07-13 06:59:04.707829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
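
With networking in place, nvmfappstart launches the target application inside the target namespace and waits for its RPC socket. The sketch below is roughly equivalent to the launch traced above; $SPDK_ROOT is a placeholder for the Jenkins workspace path shown in the trace, and waitforlisten is the harness helper that polls the RPC socket:

  # nvmfappstart -m 0xF, condensed from the xtrace
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs
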
00:11:35.368 [2024-07-13 06:59:04.707933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.368 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.368 [2024-07-13 06:59:04.754038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:35.368 [2024-07-13 06:59:04.781285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.627 [2024-07-13 06:59:04.868551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.627 [2024-07-13 06:59:04.868606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.627 [2024-07-13 06:59:04.868620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.627 [2024-07-13 06:59:04.868645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.627 [2024-07-13 06:59:04.868655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.627 [2024-07-13 06:59:04.868731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.627 [2024-07-13 06:59:04.868797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.627 [2024-07-13 06:59:04.868863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.627 [2024-07-13 06:59:04.868872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.627 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.627 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:35.627 06:59:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.627 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.627 06:59:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.627 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.627 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:35.627 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:35.627 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:35.885 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:35.885 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:35.885 "nvmf_tgt_1" 00:11:35.885 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:35.885 "nvmf_tgt_2" 00:11:36.143 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_get_targets 00:11:36.143 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:36.143 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:36.143 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:36.143 true 00:11:36.143 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:36.401 true 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.401 rmmod nvme_tcp 00:11:36.401 rmmod nvme_fabrics 00:11:36.401 rmmod nvme_keyring 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1442240 ']' 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1442240 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1442240 ']' 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1442240 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.401 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1442240 00:11:36.661 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.661 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.661 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1442240' 00:11:36.661 killing process with pid 1442240 00:11:36.661 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1442240 00:11:36.661 06:59:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1442240 00:11:36.661 06:59:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.661 06:59:06 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.661 06:59:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.661 06:59:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.661 06:59:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.661 06:59:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.661 06:59:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.661 06:59:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.199 06:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.199 00:11:39.199 real 0m5.695s 00:11:39.199 user 0m6.301s 00:11:39.199 sys 0m1.916s 00:11:39.199 06:59:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.199 06:59:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:39.199 ************************************ 00:11:39.199 END TEST nvmf_multitarget 00:11:39.199 ************************************ 00:11:39.199 06:59:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:39.199 06:59:08 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:39.199 06:59:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.199 06:59:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.199 06:59:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.199 ************************************ 00:11:39.199 START TEST nvmf_rpc 00:11:39.199 ************************************ 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:39.199 * Looking for test storage... 
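Stripped of xtrace noise, the multitarget exercise that just finished above is a handful of RPC round-trips asserting the target count before and after each change. A condensed sketch, with the repository path shortened:

  # Multitarget flow (illustrative; multitarget_rpc.py path abbreviated).
  rpc=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" = 1 ]      # only the default target at start
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" = 3 ]      # default plus the two new targets
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" = 1 ]      # back to the default only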
00:11:39.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.199 06:59:08 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.200 06:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
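The device scan that follows resolves each supported E810 PCI function to its kernel net device through sysfs; a minimal sketch of that lookup, with the PCI address taken from this run and the glob copied from the trace below:

  # Map one PCI function to its net device name, as the scan below does.
  pci=0000:0a:00.0                                    # Intel E810 (0x8086 - 0x159b) on this machine
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip directories, keep device names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"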
00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:11:41.103 00:11:41.103 --- 10.0.0.2 ping statistics --- 00:11:41.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.103 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:11:41.103 00:11:41.103 --- 10.0.0.1 ping statistics --- 00:11:41.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.103 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1444337 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1444337 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1444337 ']' 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.103 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.103 [2024-07-13 06:59:10.521672] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:11:41.103 [2024-07-13 06:59:10.521760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.103 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.362 [2024-07-13 06:59:10.560716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:41.362 [2024-07-13 06:59:10.592822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.362 [2024-07-13 06:59:10.688054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:41.362 [2024-07-13 06:59:10.688115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.362 [2024-07-13 06:59:10.688132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.362 [2024-07-13 06:59:10.688146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.362 [2024-07-13 06:59:10.688166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.362 [2024-07-13 06:59:10.688222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.362 [2024-07-13 06:59:10.688273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.362 [2024-07-13 06:59:10.688333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.362 [2024-07-13 06:59:10.688336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.362 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.362 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:41.362 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.362 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:41.362 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:41.621 "tick_rate": 2700000000, 00:11:41.621 "poll_groups": [ 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_000", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [] 00:11:41.621 }, 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_001", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [] 00:11:41.621 }, 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_002", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [] 00:11:41.621 }, 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_003", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [] 00:11:41.621 } 00:11:41.621 ] 00:11:41.621 }' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 [2024-07-13 06:59:10.922841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:41.621 "tick_rate": 2700000000, 00:11:41.621 "poll_groups": [ 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_000", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [ 00:11:41.621 { 00:11:41.621 "trtype": "TCP" 00:11:41.621 } 00:11:41.621 ] 00:11:41.621 }, 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_001", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [ 00:11:41.621 { 00:11:41.621 "trtype": "TCP" 00:11:41.621 } 00:11:41.621 ] 00:11:41.621 }, 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_002", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [ 00:11:41.621 { 00:11:41.621 "trtype": "TCP" 00:11:41.621 } 00:11:41.621 ] 00:11:41.621 }, 00:11:41.621 { 00:11:41.621 "name": "nvmf_tgt_poll_group_003", 00:11:41.621 "admin_qpairs": 0, 00:11:41.621 "io_qpairs": 0, 00:11:41.621 "current_admin_qpairs": 0, 00:11:41.621 "current_io_qpairs": 0, 00:11:41.621 "pending_bdev_io": 0, 00:11:41.621 "completed_nvme_io": 0, 00:11:41.621 "transports": [ 00:11:41.621 { 00:11:41.621 "trtype": "TCP" 00:11:41.621 } 00:11:41.621 ] 00:11:41.621 } 00:11:41.621 ] 00:11:41.621 }' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:41.621 06:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 Malloc1 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.621 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.879 [2024-07-13 06:59:11.076039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:41.879 [2024-07-13 06:59:11.098540] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:41.879 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:41.879 could not add new controller: failed to write to nvme-fabrics device 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.879 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.880 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.445 06:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.445 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:42.445 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.445 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:42.445 06:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.970 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.971 [2024-07-13 06:59:13.966101] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:44.971 Failed to write to 
/dev/nvme-fabrics: Input/output error 00:11:44.971 could not add new controller: failed to write to nvme-fabrics device 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.971 06:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.537 06:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.537 06:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.537 06:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.537 06:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:45.537 06:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.435 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.436 06:59:16 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.436 [2024-07-13 06:59:16.797673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.436 06:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.367 06:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.367 06:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.367 06:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.367 06:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.367 06:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.260 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.261 [2024-07-13 06:59:19.642418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.261 06:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.191 06:59:20 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.191 06:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:51.191 06:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.191 06:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:51.191 06:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.118 [2024-07-13 06:59:22.488371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.118 06:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.050 06:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.050 06:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.050 06:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.050 06:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.050 06:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 
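What the repeated lsblk/grep entries above are doing: after `nvme connect`, the harness's waitforserial helper polls the block-device list until a namespace carrying the subsystem serial (SPDKISFASTANDAWESOME) shows up, and after `nvme disconnect` its waitforserial_disconnect counterpart polls until that serial is gone. A minimal standalone sketch of the same polling pattern (the helper names and the roughly 15-try, 2-second-sleep budget are read off the trace; this is an illustration, not the verbatim autotest_common.sh code):

    # Wait until a block device with the given serial appears (post-connect).
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            # lsblk -l prints one device per row; count rows whose SERIAL matches.
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }

    # Wait until no block device carries the serial any more (post-disconnect).
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }
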
00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.946 [2024-07-13 06:59:25.299960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.946 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.879 06:59:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.879 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:56.880 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.880 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:56.880 06:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:58.780 06:59:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:58.780 06:59:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:58.780 06:59:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.780 [2024-07-13 06:59:28.139437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:58.780 
06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.780 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.347 06:59:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.347 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.347 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.347 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:59.347 06:59:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 [2024-07-13 06:59:30.910512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 [2024-07-13 06:59:30.958571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.877 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 [2024-07-13 06:59:31.006749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
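Each pass through this stretch is the same subsystem lifecycle, driven entirely through SPDK's JSON-RPC client: create the subsystem with a fixed serial, attach a TCP listener on 10.0.0.2:4420, expose the Malloc1 bdev as a namespace, open it to any host, then unwind in reverse order. Condensed into one loop (the rpc.py path, NQN, and RPC names are taken from the trace; the iteration count here is illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1      # NSID auto-assigned
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1         # remove by NSID
        $rpc nvmf_delete_subsystem "$nqn"
    done
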
00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 [2024-07-13 06:59:31.054913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
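One detail worth noting between the two loops: the earlier connect/disconnect pass added the namespace with an explicit NSID (`nvmf_subsystem_add_ns ... Malloc1 -n 5`) and removed NSID 5 afterwards, whereas this pass omits -n, so the target assigns the first free NSID and the matching remove targets NSID 1. Side by side (same $rpc and $nqn as in the sketch above):

    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5   # request NSID 5 explicitly
    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1        # target picks the NSID (1 here)
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
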
00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 [2024-07-13 06:59:31.103082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:01.878 "tick_rate": 2700000000, 00:12:01.878 "poll_groups": [ 00:12:01.878 { 00:12:01.878 "name": "nvmf_tgt_poll_group_000", 00:12:01.878 "admin_qpairs": 2, 00:12:01.878 "io_qpairs": 84, 00:12:01.878 "current_admin_qpairs": 0, 00:12:01.878 "current_io_qpairs": 0, 00:12:01.878 "pending_bdev_io": 0, 00:12:01.878 "completed_nvme_io": 186, 00:12:01.878 "transports": [ 00:12:01.878 { 00:12:01.878 "trtype": "TCP" 00:12:01.878 } 00:12:01.878 ] 00:12:01.878 }, 00:12:01.878 { 00:12:01.878 "name": "nvmf_tgt_poll_group_001", 00:12:01.878 "admin_qpairs": 2, 00:12:01.878 "io_qpairs": 84, 00:12:01.878 "current_admin_qpairs": 0, 00:12:01.878 "current_io_qpairs": 0, 00:12:01.878 "pending_bdev_io": 0, 
00:12:01.878 "completed_nvme_io": 214, 00:12:01.878 "transports": [ 00:12:01.878 { 00:12:01.878 "trtype": "TCP" 00:12:01.878 } 00:12:01.878 ] 00:12:01.878 }, 00:12:01.878 { 00:12:01.878 "name": "nvmf_tgt_poll_group_002", 00:12:01.878 "admin_qpairs": 1, 00:12:01.878 "io_qpairs": 84, 00:12:01.878 "current_admin_qpairs": 0, 00:12:01.878 "current_io_qpairs": 0, 00:12:01.878 "pending_bdev_io": 0, 00:12:01.878 "completed_nvme_io": 135, 00:12:01.878 "transports": [ 00:12:01.878 { 00:12:01.878 "trtype": "TCP" 00:12:01.878 } 00:12:01.878 ] 00:12:01.878 }, 00:12:01.878 { 00:12:01.878 "name": "nvmf_tgt_poll_group_003", 00:12:01.878 "admin_qpairs": 2, 00:12:01.878 "io_qpairs": 84, 00:12:01.878 "current_admin_qpairs": 0, 00:12:01.878 "current_io_qpairs": 0, 00:12:01.878 "pending_bdev_io": 0, 00:12:01.878 "completed_nvme_io": 151, 00:12:01.878 "transports": [ 00:12:01.878 { 00:12:01.878 "trtype": "TCP" 00:12:01.878 } 00:12:01.878 ] 00:12:01.878 } 00:12:01.878 ] 00:12:01.878 }' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.878 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.878 rmmod nvme_tcp 00:12:01.878 rmmod nvme_fabrics 00:12:01.878 rmmod nvme_keyring 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1444337 ']' 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1444337 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1444337 ']' 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1444337 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1444337 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1444337' 00:12:01.879 killing process with pid 1444337 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1444337 00:12:01.879 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1444337 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.137 06:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.672 06:59:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.672 00:12:04.672 real 0m25.409s 00:12:04.672 user 1m22.581s 00:12:04.672 sys 0m4.201s 00:12:04.672 06:59:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.672 06:59:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.672 ************************************ 00:12:04.672 END TEST nvmf_rpc 00:12:04.672 ************************************ 00:12:04.672 06:59:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:04.672 06:59:33 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:04.672 06:59:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:04.672 06:59:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.672 06:59:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:04.672 ************************************ 00:12:04.672 START TEST nvmf_invalid 00:12:04.672 ************************************ 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:04.672 * Looking for test storage... 
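The "killing process with pid 1444337" sequence above is autotest's killprocess guard: it first checks that the PID is still alive (kill -0), resolves the process's command name (reactor_0, the nvmf_tgt reactor thread) and refuses to proceed if that name is sudo, and only then signals and waits. A rough sketch of that guard logic (simplified from what the trace shows, not the verbatim helper):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 1                 # bail out if already gone
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [ "$name" = sudo ] && return 1             # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # valid because the target is our child
    }
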
00:12:04.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.672 06:59:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.673 06:59:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:06.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:06.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:06.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:06.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.575 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:06.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:12:06.576 00:12:06.576 --- 10.0.0.2 ping statistics --- 00:12:06.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.576 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:06.576 00:12:06.576 --- 10.0.0.1 ping statistics --- 00:12:06.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.576 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1448832 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1448832 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1448832 ']' 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.576 06:59:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.576 [2024-07-13 06:59:35.898897] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
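The topology nvmf_invalid runs against is built from one two-port NIC: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace (where nvmf_tgt will run) and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420, and the two pings above prove reachability in both directions before the target starts. The same setup, condensed (interface names as on this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
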
00:12:06.576 [2024-07-13 06:59:35.898988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.576 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.576 [2024-07-13 06:59:35.938898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:06.576 [2024-07-13 06:59:35.965624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.834 [2024-07-13 06:59:36.055829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.834 [2024-07-13 06:59:36.055924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.834 [2024-07-13 06:59:36.055940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.834 [2024-07-13 06:59:36.055953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.834 [2024-07-13 06:59:36.055963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.834 [2024-07-13 06:59:36.056021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.834 [2024-07-13 06:59:36.056083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.834 [2024-07-13 06:59:36.056149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.834 [2024-07-13 06:59:36.056151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:06.834 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2613 00:12:07.093 [2024-07-13 06:59:36.436189] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:07.093 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:07.093 { 00:12:07.093 "nqn": "nqn.2016-06.io.spdk:cnode2613", 00:12:07.093 "tgt_name": "foobar", 00:12:07.093 "method": "nvmf_create_subsystem", 00:12:07.093 "req_id": 1 00:12:07.093 } 00:12:07.093 Got JSON-RPC error response 00:12:07.093 response: 00:12:07.093 { 00:12:07.093 "code": -32603, 00:12:07.093 "message": "Unable to find target foobar" 00:12:07.093 }' 00:12:07.093 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:07.093 { 00:12:07.093 "nqn": "nqn.2016-06.io.spdk:cnode2613", 00:12:07.093 "tgt_name": "foobar", 00:12:07.093 "method": "nvmf_create_subsystem", 00:12:07.093 "req_id": 1 
00:12:07.093 } 00:12:07.093 Got JSON-RPC error response 00:12:07.093 response: 00:12:07.093 { 00:12:07.093 "code": -32603, 00:12:07.093 "message": "Unable to find target foobar" 00:12:07.093 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:07.093 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:07.093 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27053 00:12:07.351 [2024-07-13 06:59:36.733192] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27053: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:07.351 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:07.351 { 00:12:07.351 "nqn": "nqn.2016-06.io.spdk:cnode27053", 00:12:07.351 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.351 "method": "nvmf_create_subsystem", 00:12:07.351 "req_id": 1 00:12:07.351 } 00:12:07.351 Got JSON-RPC error response 00:12:07.351 response: 00:12:07.351 { 00:12:07.351 "code": -32602, 00:12:07.351 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.351 }' 00:12:07.351 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:07.351 { 00:12:07.351 "nqn": "nqn.2016-06.io.spdk:cnode27053", 00:12:07.351 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.351 "method": "nvmf_create_subsystem", 00:12:07.351 "req_id": 1 00:12:07.351 } 00:12:07.351 Got JSON-RPC error response 00:12:07.351 response: 00:12:07.351 { 00:12:07.351 "code": -32602, 00:12:07.351 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.351 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.351 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:07.351 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28596 00:12:07.610 [2024-07-13 06:59:36.977971] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28596: invalid model number 'SPDK_Controller' 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:07.610 { 00:12:07.610 "nqn": "nqn.2016-06.io.spdk:cnode28596", 00:12:07.610 "model_number": "SPDK_Controller\u001f", 00:12:07.610 "method": "nvmf_create_subsystem", 00:12:07.610 "req_id": 1 00:12:07.610 } 00:12:07.610 Got JSON-RPC error response 00:12:07.610 response: 00:12:07.610 { 00:12:07.610 "code": -32602, 00:12:07.610 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.610 }' 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:07.610 { 00:12:07.610 "nqn": "nqn.2016-06.io.spdk:cnode28596", 00:12:07.610 "model_number": "SPDK_Controller\u001f", 00:12:07.610 "method": "nvmf_create_subsystem", 00:12:07.610 "req_id": 1 00:12:07.610 } 00:12:07.610 Got JSON-RPC error response 00:12:07.610 response: 00:12:07.610 { 00:12:07.610 "code": -32602, 00:12:07.610 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.610 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' 
'48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.610 06:59:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 
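# Each negative test in this file follows one shape: call rpc.py with a single invalid
# argument, capture the JSON-RPC error, then glob-match the message (the trace shows the
# match in its escaped [[ ... == *\U\n\a\b\l\e* ]] form). A minimal sketch, with rpc.py
# standing in for the full scripts/rpc.py path used above, and "|| true" assumed to
# swallow the expected nonzero exit:
out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2613 2>&1) || true
[[ $out == *"Unable to find target"* ]]   # unescaped form of the check above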
00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:07.610 
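# gen_random_s, traced here one character at a time, just maps byte values 32..127 (the
# chars table above) onto a string of the requested length. A compact equivalent sketch,
# not the invalid.sh implementation itself:
gen_random_s() {
    local length=$1 ll string=""
    for ((ll = 0; ll < length; ll++)); do
        # pick a byte in 32..127, same range as the chars table
        string+=$(printf "\\x$(printf %x $((RANDOM % 96 + 32)))")
    done
    printf '%s\n' "$string"
}
# The requested lengths are deliberate: NVMe reserves 20 bytes for a serial number and 40
# for a model number, so 21- and 41-character strings are the shortest guaranteed rejects.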
06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'nXV7oO:AFiG,J:YJpF*`C' 00:12:07.610 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'nXV7oO:AFiG,J:YJpF*`C' nqn.2016-06.io.spdk:cnode7967 00:12:07.868 [2024-07-13 06:59:37.287010] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7967: invalid serial number 'nXV7oO:AFiG,J:YJpF*`C' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:07.869 { 00:12:07.869 "nqn": "nqn.2016-06.io.spdk:cnode7967", 00:12:07.869 "serial_number": 
"nXV7oO:AFiG,J:YJpF*`C", 00:12:07.869 "method": "nvmf_create_subsystem", 00:12:07.869 "req_id": 1 00:12:07.869 } 00:12:07.869 Got JSON-RPC error response 00:12:07.869 response: 00:12:07.869 { 00:12:07.869 "code": -32602, 00:12:07.869 "message": "Invalid SN nXV7oO:AFiG,J:YJpF*`C" 00:12:07.869 }' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:07.869 { 00:12:07.869 "nqn": "nqn.2016-06.io.spdk:cnode7967", 00:12:07.869 "serial_number": "nXV7oO:AFiG,J:YJpF*`C", 00:12:07.869 "method": "nvmf_create_subsystem", 00:12:07.869 "req_id": 1 00:12:07.869 } 00:12:07.869 Got JSON-RPC error response 00:12:07.869 response: 00:12:07.869 { 00:12:07.869 "code": -32602, 00:12:07.869 "message": "Invalid SN nXV7oO:AFiG,J:YJpF*`C" 00:12:07.869 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.869 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
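# Characters that need shell quoting show up escaped in this loop: string+=\' for a single
# quote and, just below, string+=$'\177' for the DEL byte; in the JSON-RPC error further
# down, that same byte comes back escaped as \u007f. One way to inspect the raw byte (an
# illustrative one-liner, not part of invalid.sh):
printf '%s' $'\177' | od -An -c   # od renders the DEL byte as octal 177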
00:12:08.128 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 121 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '"O2Ac?v(?exJq-:QI`whQ7}<'\''P\)8yB?e|/_' 00:12:08.129 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '"O2Ac?v(?exJq-:QI`whQ7}<'\''P\)8yB?e|/_' nqn.2016-06.io.spdk:cnode26703 00:12:08.387 [2024-07-13 06:59:37.660229] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26703: invalid model number 
'"O2Ac?v(?exJq-:QI`whQ7}<'P\)8yB?e|/_' 00:12:08.387 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:08.387 { 00:12:08.387 "nqn": "nqn.2016-06.io.spdk:cnode26703", 00:12:08.387 "model_number": "\"O2Ac?v(?exJq-:QI`whQ7}<'\''P\\\u007f)8yB?e|/_", 00:12:08.387 "method": "nvmf_create_subsystem", 00:12:08.387 "req_id": 1 00:12:08.387 } 00:12:08.387 Got JSON-RPC error response 00:12:08.387 response: 00:12:08.387 { 00:12:08.387 "code": -32602, 00:12:08.387 "message": "Invalid MN \"O2Ac?v(?exJq-:QI`whQ7}<'\''P\\\u007f)8yB?e|/_" 00:12:08.387 }' 00:12:08.387 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:08.387 { 00:12:08.387 "nqn": "nqn.2016-06.io.spdk:cnode26703", 00:12:08.387 "model_number": "\"O2Ac?v(?exJq-:QI`whQ7}<'P\\\u007f)8yB?e|/_", 00:12:08.387 "method": "nvmf_create_subsystem", 00:12:08.387 "req_id": 1 00:12:08.387 } 00:12:08.387 Got JSON-RPC error response 00:12:08.387 response: 00:12:08.387 { 00:12:08.387 "code": -32602, 00:12:08.387 "message": "Invalid MN \"O2Ac?v(?exJq-:QI`whQ7}<'P\\\u007f)8yB?e|/_" 00:12:08.387 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:08.387 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:08.645 [2024-07-13 06:59:37.909111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.645 06:59:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:08.902 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:08.902 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:08.902 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:08.902 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:08.902 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:09.160 [2024-07-13 06:59:38.410762] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:09.160 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:09.160 { 00:12:09.160 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:09.160 "listen_address": { 00:12:09.160 "trtype": "tcp", 00:12:09.160 "traddr": "", 00:12:09.160 "trsvcid": "4421" 00:12:09.160 }, 00:12:09.160 "method": "nvmf_subsystem_remove_listener", 00:12:09.160 "req_id": 1 00:12:09.160 } 00:12:09.160 Got JSON-RPC error response 00:12:09.160 response: 00:12:09.160 { 00:12:09.160 "code": -32602, 00:12:09.160 "message": "Invalid parameters" 00:12:09.160 }' 00:12:09.160 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:09.160 { 00:12:09.160 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:09.160 "listen_address": { 00:12:09.160 "trtype": "tcp", 00:12:09.160 "traddr": "", 00:12:09.160 "trsvcid": "4421" 00:12:09.160 }, 00:12:09.160 "method": "nvmf_subsystem_remove_listener", 00:12:09.160 "req_id": 1 00:12:09.160 } 00:12:09.160 Got JSON-RPC error response 00:12:09.160 response: 00:12:09.160 { 00:12:09.160 "code": -32602, 00:12:09.160 "message": "Invalid parameters" 00:12:09.160 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:09.160 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12453 -i 0 00:12:09.436 [2024-07-13 06:59:38.663592] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12453: invalid cntlid range [0-65519] 00:12:09.436 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:09.436 { 00:12:09.436 "nqn": "nqn.2016-06.io.spdk:cnode12453", 00:12:09.436 "min_cntlid": 0, 00:12:09.436 "method": "nvmf_create_subsystem", 00:12:09.436 "req_id": 1 00:12:09.436 } 00:12:09.436 Got JSON-RPC error response 00:12:09.436 response: 00:12:09.436 { 00:12:09.436 "code": -32602, 00:12:09.436 "message": "Invalid cntlid range [0-65519]" 00:12:09.436 }' 00:12:09.436 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:09.436 { 00:12:09.436 "nqn": "nqn.2016-06.io.spdk:cnode12453", 00:12:09.436 "min_cntlid": 0, 00:12:09.436 "method": "nvmf_create_subsystem", 00:12:09.436 "req_id": 1 00:12:09.436 } 00:12:09.436 Got JSON-RPC error response 00:12:09.436 response: 00:12:09.436 { 00:12:09.436 "code": -32602, 00:12:09.436 "message": "Invalid cntlid range [0-65519]" 00:12:09.436 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.436 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27074 -i 65520 00:12:09.694 [2024-07-13 06:59:38.912358] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27074: invalid cntlid range [65520-65519] 00:12:09.694 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:09.694 { 00:12:09.694 "nqn": "nqn.2016-06.io.spdk:cnode27074", 00:12:09.694 "min_cntlid": 65520, 00:12:09.694 "method": "nvmf_create_subsystem", 00:12:09.694 "req_id": 1 00:12:09.694 } 00:12:09.694 Got JSON-RPC error response 00:12:09.694 response: 00:12:09.694 { 00:12:09.694 "code": -32602, 00:12:09.694 "message": "Invalid cntlid range [65520-65519]" 00:12:09.694 }' 00:12:09.694 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:09.694 { 00:12:09.694 "nqn": "nqn.2016-06.io.spdk:cnode27074", 00:12:09.694 "min_cntlid": 65520, 00:12:09.694 "method": "nvmf_create_subsystem", 00:12:09.694 "req_id": 1 00:12:09.694 } 00:12:09.694 Got JSON-RPC error response 00:12:09.694 response: 00:12:09.694 { 00:12:09.694 "code": -32602, 00:12:09.694 "message": "Invalid cntlid range [65520-65519]" 00:12:09.694 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.694 06:59:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14940 -I 0 00:12:09.952 [2024-07-13 06:59:39.153236] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14940: invalid cntlid range [1-0] 00:12:09.952 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:09.952 { 00:12:09.952 "nqn": "nqn.2016-06.io.spdk:cnode14940", 00:12:09.952 "max_cntlid": 0, 00:12:09.952 "method": "nvmf_create_subsystem", 00:12:09.952 "req_id": 1 00:12:09.952 } 00:12:09.952 Got JSON-RPC error response 00:12:09.952 response: 00:12:09.952 { 00:12:09.952 "code": -32602, 00:12:09.952 "message": "Invalid cntlid range [1-0]" 00:12:09.952 }' 00:12:09.952 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:09.952 { 00:12:09.952 "nqn": "nqn.2016-06.io.spdk:cnode14940", 
00:12:09.952 "max_cntlid": 0, 00:12:09.952 "method": "nvmf_create_subsystem", 00:12:09.952 "req_id": 1 00:12:09.952 } 00:12:09.952 Got JSON-RPC error response 00:12:09.952 response: 00:12:09.952 { 00:12:09.952 "code": -32602, 00:12:09.952 "message": "Invalid cntlid range [1-0]" 00:12:09.952 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.952 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3579 -I 65520 00:12:10.209 [2024-07-13 06:59:39.450212] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3579: invalid cntlid range [1-65520] 00:12:10.209 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:10.209 { 00:12:10.209 "nqn": "nqn.2016-06.io.spdk:cnode3579", 00:12:10.209 "max_cntlid": 65520, 00:12:10.209 "method": "nvmf_create_subsystem", 00:12:10.209 "req_id": 1 00:12:10.209 } 00:12:10.209 Got JSON-RPC error response 00:12:10.209 response: 00:12:10.209 { 00:12:10.209 "code": -32602, 00:12:10.209 "message": "Invalid cntlid range [1-65520]" 00:12:10.209 }' 00:12:10.209 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:10.209 { 00:12:10.209 "nqn": "nqn.2016-06.io.spdk:cnode3579", 00:12:10.209 "max_cntlid": 65520, 00:12:10.209 "method": "nvmf_create_subsystem", 00:12:10.209 "req_id": 1 00:12:10.209 } 00:12:10.209 Got JSON-RPC error response 00:12:10.209 response: 00:12:10.209 { 00:12:10.209 "code": -32602, 00:12:10.209 "message": "Invalid cntlid range [1-65520]" 00:12:10.209 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.209 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8571 -i 6 -I 5 00:12:10.468 [2024-07-13 06:59:39.711063] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8571: invalid cntlid range [6-5] 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:10.468 { 00:12:10.468 "nqn": "nqn.2016-06.io.spdk:cnode8571", 00:12:10.468 "min_cntlid": 6, 00:12:10.468 "max_cntlid": 5, 00:12:10.468 "method": "nvmf_create_subsystem", 00:12:10.468 "req_id": 1 00:12:10.468 } 00:12:10.468 Got JSON-RPC error response 00:12:10.468 response: 00:12:10.468 { 00:12:10.468 "code": -32602, 00:12:10.468 "message": "Invalid cntlid range [6-5]" 00:12:10.468 }' 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:10.468 { 00:12:10.468 "nqn": "nqn.2016-06.io.spdk:cnode8571", 00:12:10.468 "min_cntlid": 6, 00:12:10.468 "max_cntlid": 5, 00:12:10.468 "method": "nvmf_create_subsystem", 00:12:10.468 "req_id": 1 00:12:10.468 } 00:12:10.468 Got JSON-RPC error response 00:12:10.468 response: 00:12:10.468 { 00:12:10.468 "code": -32602, 00:12:10.468 "message": "Invalid cntlid range [6-5]" 00:12:10.468 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:10.468 { 00:12:10.468 "name": "foobar", 00:12:10.468 "method": "nvmf_delete_target", 00:12:10.468 "req_id": 1 00:12:10.468 } 00:12:10.468 Got JSON-RPC error response 00:12:10.468 response: 00:12:10.468 { 00:12:10.468 "code": 
-32602, 00:12:10.468 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:10.468 }' 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:10.468 { 00:12:10.468 "name": "foobar", 00:12:10.468 "method": "nvmf_delete_target", 00:12:10.468 "req_id": 1 00:12:10.468 } 00:12:10.468 Got JSON-RPC error response 00:12:10.468 response: 00:12:10.468 { 00:12:10.468 "code": -32602, 00:12:10.468 "message": "The specified target doesn't exist, cannot delete it." 00:12:10.468 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.468 rmmod nvme_tcp 00:12:10.468 rmmod nvme_fabrics 00:12:10.468 rmmod nvme_keyring 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1448832 ']' 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1448832 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1448832 ']' 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1448832 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448832 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448832' 00:12:10.468 killing process with pid 1448832 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1448832 00:12:10.468 06:59:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1448832 00:12:10.726 06:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.726 06:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.726 06:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.726 06:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.726 06:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.726 06:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.726 06:59:40 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.727 06:59:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.260 06:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.260 00:12:13.260 real 0m8.545s 00:12:13.260 user 0m19.939s 00:12:13.260 sys 0m2.349s 00:12:13.260 06:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.260 06:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:13.260 ************************************ 00:12:13.260 END TEST nvmf_invalid 00:12:13.260 ************************************ 00:12:13.260 06:59:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:13.260 06:59:42 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:13.260 06:59:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:13.260 06:59:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.260 06:59:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.260 ************************************ 00:12:13.260 START TEST nvmf_abort 00:12:13.260 ************************************ 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:13.260 * Looking for test storage... 00:12:13.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.260 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.261 06:59:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.157 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.158 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.158 06:59:44 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:12:15.158 00:12:15.158 --- 10.0.0.2 ping statistics --- 00:12:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.158 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:12:15.158 00:12:15.158 --- 10.0.0.1 ping statistics --- 00:12:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.158 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1451466 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1451466 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1451466 ']' 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.158 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.158 [2024-07-13 06:59:44.525174] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
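The nvmftestinit phase traced above isolates one NIC port in its own network namespace, so initiator and target traffic must cross a real interface pair. A minimal sketch of the same topology for experimentation, assuming a software veth pair in place of the physical cvl_0_* ice ports (the names spdk_tgt_ns, veth_ini and veth_tgt are illustrative, not from this harness):

ip netns add spdk_tgt_ns                         # namespace that will host nvmf_tgt
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns           # target end moves into the namespace
ip addr add 10.0.0.1/24 dev veth_ini             # initiator side, as above
ip link set veth_ini up
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                               # initiator -> target reachability
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1     # target -> initiator reachability

The target application is then launched under ip netns exec, which is exactly what the NVMF_TARGET_NS_CMD expansion above prepends to NVMF_APP.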
00:12:15.158 [2024-07-13 06:59:44.525272] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.158 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.158 [2024-07-13 06:59:44.563242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:15.159 [2024-07-13 06:59:44.595389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.416 [2024-07-13 06:59:44.689345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.416 [2024-07-13 06:59:44.689411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.416 [2024-07-13 06:59:44.689437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.416 [2024-07-13 06:59:44.689450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.416 [2024-07-13 06:59:44.689462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.416 [2024-07-13 06:59:44.689556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.416 [2024-07-13 06:59:44.689609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.416 [2024-07-13 06:59:44.689612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.416 [2024-07-13 06:59:44.845618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.416 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 Malloc0 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 Delay0 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 [2024-07-13 06:59:44.919498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.674 06:59:44 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:15.674 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.674 [2024-07-13 06:59:45.025028] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:18.201 Initializing NVMe Controllers 00:12:18.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:18.201 controller IO queue size 128 less than required 00:12:18.201 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:18.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:18.201 Initialization complete. Launching workers. 
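For reference, the rpc_cmd calls traced above from target/abort.sh correspond to this explicit scripts/rpc.py sequence (arguments copied from the trace; assumed to run from the spdk repository root against the target started above). The 1000000 values passed to bdev_delay_create are average and p99 read/write latencies in microseconds, which keeps roughly a second of I/O queued so the abort example has in-flight commands to cancel:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB bdev, 4 KiB blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s simulated latency
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the delay bdev backing the namespace, the abort example's queue depth of 128 saturates immediately, and the results that follow show nearly all submitted aborts completing successfully.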
00:12:18.201 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32772 00:12:18.201 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32833, failed to submit 62 00:12:18.201 success 32776, unsuccess 57, failed 0 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.201 rmmod nvme_tcp 00:12:18.201 rmmod nvme_fabrics 00:12:18.201 rmmod nvme_keyring 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1451466 ']' 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1451466 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1451466 ']' 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1451466 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1451466 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1451466' 00:12:18.201 killing process with pid 1451466 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1451466 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1451466 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.201 06:59:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.730 06:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:20.730 00:12:20.730 real 0m7.385s 00:12:20.730 user 0m11.038s 00:12:20.730 sys 0m2.488s 00:12:20.730 06:59:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.730 06:59:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:20.730 ************************************ 00:12:20.730 END TEST nvmf_abort 00:12:20.730 ************************************ 00:12:20.730 06:59:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:20.730 06:59:49 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:20.730 06:59:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:20.730 06:59:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.730 06:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:20.730 ************************************ 00:12:20.730 START TEST nvmf_ns_hotplug_stress 00:12:20.730 ************************************ 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:20.730 * Looking for test storage... 00:12:20.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.730 06:59:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.730 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.731 06:59:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.731 06:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.635 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:22.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:22.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.636 06:59:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:22.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:22.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.636 06:59:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.636 06:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:12:22.636 00:12:22.636 --- 10.0.0.2 ping statistics --- 00:12:22.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.636 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:22.636 00:12:22.636 --- 10.0.0.1 ping statistics --- 00:12:22.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.636 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1453804 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1453804 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1453804 ']' 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.636 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.895 [2024-07-13 06:59:52.110957] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:12:22.895 [2024-07-13 06:59:52.111049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.895 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.895 [2024-07-13 06:59:52.152178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
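The ns_hotplug_stress target setup that follows mirrors the abort test's bring-up, then deliberately races namespace hot-remove/hot-add and null-bdev resizing against a running spdk_nvme_perf workload. A condensed sketch of that loop, inferred from the rpc.py calls in the trace below (the authoritative version is test/nvmf/target/ns_hotplug_stress.sh, not reproduced verbatim here):

scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # becomes nsid 1
scripts/rpc.py bdev_null_create NULL1 1000 512                            # 1000 MiB, 512 B blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# spdk_nvme_perf is started in the background at this point; its pid is PERF_PID
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                 # loop while perf still runs
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # yank nsid 1 under load
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    scripts/rpc.py bdev_null_resize NULL1 "$null_size"    # grow by 1 MiB each pass
done

The recurring 'Read completed with error (sct=0, sc=11)' records below are the intended effect: reads issued by the perf workload race the namespace removal, and the initiator rate-limits the output, hence the 'Message suppressed 999 times' prefix, rather than failing the run.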
00:12:22.895 [2024-07-13 06:59:52.181137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.895 [2024-07-13 06:59:52.270288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.895 [2024-07-13 06:59:52.270342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.895 [2024-07-13 06:59:52.270356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.895 [2024-07-13 06:59:52.270367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.895 [2024-07-13 06:59:52.270377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.895 [2024-07-13 06:59:52.270492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.895 [2024-07-13 06:59:52.270557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.895 [2024-07-13 06:59:52.270559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:23.153 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:23.411 [2024-07-13 06:59:52.612319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.411 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.667 06:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.667 [2024-07-13 06:59:53.107161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.924 06:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.924 06:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:24.182 Malloc0 00:12:24.439 06:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:24.439 Delay0 00:12:24.439 06:59:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.697 06:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:25.262 NULL1 00:12:25.262 06:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:25.262 06:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1454103 00:12:25.262 06:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:25.262 06:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:25.262 06:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.262 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.670 Read completed with error (sct=0, sc=11) 00:12:26.670 06:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.927 06:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:26.927 06:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:27.184 true 00:12:27.184 06:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:27.184 06:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.747 06:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.312 06:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:28.312 06:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:28.617 true 00:12:28.617 06:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:28.617 06:59:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.617 06:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.873 06:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:28.873 06:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:29.130 true 00:12:29.130 06:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:29.130 06:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.387 06:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.644 06:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:29.644 06:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:29.901 true 00:12:29.901 06:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:29.901 06:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.093 07:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.351 07:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:31.351 07:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:31.608 true 00:12:31.608 07:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:31.608 07:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:32.173 07:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:32.430 07:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:32.430 07:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:32.686 true 00:12:32.686 07:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:32.686 07:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.942 07:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.198 07:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:33.198 07:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:33.454 true 00:12:33.454 07:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:33.454 07:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.385 07:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.642 07:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:34.642 07:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:34.898 true 00:12:34.898 07:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:34.898 07:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.154 07:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.411 07:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:35.411 07:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:35.667 true 00:12:35.667 07:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:35.667 07:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.925 07:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.182 07:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:36.182 07:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:36.440 true 00:12:36.440 07:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:36.440 07:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.372 07:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.629 07:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:37.629 07:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:37.886 true 00:12:37.886 07:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:37.886 07:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.818 07:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.818 07:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:38.818 07:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:39.075 true 00:12:39.075 07:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:39.075 07:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.332 07:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.589 07:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:39.589 07:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:39.848 true 00:12:39.848 07:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:39.848 07:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.146 07:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.403 07:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:40.403 07:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:40.661 true 00:12:40.661 07:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:40.661 07:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.592 07:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.849 07:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:41.849 07:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:42.105 true 00:12:42.106 07:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:42.106 07:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.362 07:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.619 07:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:42.619 07:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:42.876 true 00:12:42.876 07:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:42.876 07:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.807 07:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.064 07:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:44.064 07:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:44.321 true 00:12:44.321 07:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:44.321 07:00:13 
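
A note on what this stretch of trace is doing: script lines @44-@50 of ns_hotplug_stress.sh implement the single-namespace hot-plug loop. Below is a minimal sketch of that loop reconstructed from the trace alone; the PERF_PID variable name and the exact while-form are assumptions, while the rpc.py path, the two namespace calls, and the null_size progression (1003, 1004, ...) are taken verbatim from the log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1002                      # this excerpt picks up at 1003
    while kill -0 "$PERF_PID"; do       # loop while the I/O generator is alive
        # Detach namespace 1 while the initiator is mid-I/O ...
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        # ... and immediately reattach it, backed by the Delay0 bdev.
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # Grow the second namespace's backing bdev by one unit per pass.
        ((++null_size))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done

Once the I/O generator exits, the kill -0 guard fails ("No such process" further down in this section) and the script falls through to "wait 1454103" at line @53.
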
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.578 07:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.835 07:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:44.835 07:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:45.092 true 00:12:45.092 07:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:45.092 07:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.024 07:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.281 07:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:46.281 07:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:46.538 true 00:12:46.538 07:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:46.538 07:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.795 07:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.795 07:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:46.795 07:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:47.052 true 00:12:47.052 07:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:47.052 07:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.004 07:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.261 07:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:48.261 07:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:48.518 true 00:12:48.518 07:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:48.518 
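
The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" bursts are expected here rather than a failure: the initiator keeps issuing reads against a namespace the loop has just detached, so completions fail until the namespace comes back. To watch the flapping from the target side, a spot check along these lines works; nvmf_get_subsystems is a standard SPDK RPC, but the jq filter assumes its usual array-of-subsystems output shape:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'
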
07:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.775 07:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.033 07:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:49.033 07:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:49.290 true 00:12:49.290 07:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:49.290 07:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.223 07:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.480 07:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:50.480 07:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:50.738 true 00:12:50.738 07:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:50.738 07:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.995 07:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.252 07:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:51.252 07:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:51.509 true 00:12:51.509 07:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:51.509 07:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.440 07:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.702 07:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:52.702 07:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:52.702 true 00:12:52.960 07:00:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:52.960 07:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.960 07:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.216 07:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:53.216 07:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:53.537 true 00:12:53.537 07:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:53.537 07:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.467 07:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.724 07:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:54.724 07:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:54.980 true 00:12:54.980 07:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:54.980 07:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.237 07:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.494 07:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:55.494 07:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:55.751 Initializing NVMe Controllers 00:12:55.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:55.751 Controller IO queue size 128, less than required. 00:12:55.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:55.752 Controller IO queue size 128, less than required. 00:12:55.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:55.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:55.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:55.752 Initialization complete. Launching workers. 
00:12:55.752 ========================================================
00:12:55.752 Latency(us)
00:12:55.752 Device Information : IOPS MiB/s Average min max
00:12:55.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1100.97 0.54 61770.22 2713.97 1044677.91
00:12:55.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10895.64 5.32 11747.57 3190.01 449948.25
00:12:55.752 ========================================================
00:12:55.752 Total : 11996.61 5.86 16338.33 2713.97 1044677.91
00:12:55.752 00:12:55.752 true 00:12:55.752 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1454103 00:12:55.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1454103) - No such process 00:12:55.752 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1454103 00:12:55.752 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.009 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:56.267 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:56.267 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:56.267 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:56.267 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:56.267 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:56.525 null0 00:12:56.525 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:56.525 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:56.525 07:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:56.783 null1 00:12:56.783 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:56.783 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:56.783 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:57.040 null2 00:12:57.040 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:57.040 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:57.040 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:57.297 null3 00:12:57.297 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:57.297 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:57.297 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:57.553 null4 00:12:57.553 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:57.553 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:57.553 07:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:57.810 null5 00:12:57.810 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:57.810 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:57.810 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:58.067 null6 00:12:58.067 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:58.067 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:58.067 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:58.326 null7 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
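
With the single-namespace phase done, the @58-@60 traces above set up the parallel phase: eight null bdevs, 100 MB each with a 4096-byte block size, one per worker. A sketch of that setup, with the for-loop form inferred from the (( i = 0 )) / (( i < nthreads )) arithmetic traces:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size_mb> <block_size>
        $rpc_py bdev_null_create "null$i" 100 4096
    done
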
00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.326 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
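
Each worker runs the add_remove helper traced at script lines @14-@18: ten rounds of attaching its own namespace ID to cnode1 and tearing it down again. A plausible reconstruction follows; only the local assignments, the loop counters, and the two rpc.py calls appear literally in the trace, the surrounding function body is inferred:

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
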
00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
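
The launch pattern shows in the interleaved @62-@64 traces: each add_remove call runs in the background, its pid is appended to pids, and the script later blocks on all eight at once (the "@66 -- # wait 1458758 1458759 ..." entry just below). Continuing the sketch above, with the nsid-to-bdev mapping read off the "add_remove 1 null0", "add_remove 2 null1", ... trace lines:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # nsid 1 <-> null0, nsid 2 <-> null1, ...
        pids+=($!)
    done
    wait "${pids[@]}"
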
00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1458758 1458759 1458761 1458763 1458765 1458767 1458769 1458771 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.327 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:58.585 07:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.842 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:59.099 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:59.100 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.100 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:59.100 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:59.100 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:59.100 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:59.100 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:59.100 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.358 07:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:59.616 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:59.872 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:00.130 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.130 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.130 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:00.130 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.130 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.130 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:00.130 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:00.387 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.387 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:00.387 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:00.387 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:00.387 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.387 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:00.387 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:00.644 07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:00.644 
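
From here to the end of the section the eight workers interleave freely, which is why the @17 adds and @18 removes cycle through namespace IDs 1-8 in a different order on every round; hammering one subsystem with concurrent add_ns/remove_ns calls is exactly the race this phase is stressing. When eyeballing the interleaving, a quick tally helps ("build.log" is a placeholder for wherever this console output was saved):

    grep -o 'nvmf_subsystem_add_ns -n [1-8]' build.log | sort | uniq -c

Every namespace ID should converge on the same count, ten adds and ten removes per worker, once the phase completes.
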
07:00:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.902 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.160 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:01.418 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:01.681 07:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:01.941 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:01.941 
07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:01.941 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:01.941 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:01.941 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:01.941 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.941 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:01.941 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.199 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:02.456 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:02.456 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:02.456 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:02.456 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:02.457 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:02.457 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:02.457 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.457 07:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.714 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:02.972 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:02.972 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:02.972 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:02.972 
07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:02.972 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:02.972 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.972 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:02.972 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.229 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.486 07:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
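
The block that repeats above is the whole of the hotplug stress: ns_hotplug_stress.sh line 16 advances a ten-round counter, line 17 attaches null bdevs null0 through null7 as namespaces 1 through 8 of nqn.2016-06.io.spdk:cnode1, and line 18 detaches them again. The shuffled ordering of the add and remove calls suggests the work is parallelized; one plausible shape in bash, hedged because only the trace is visible here (the add_remove worker name and the backgrounding are assumptions, while the RPC invocations are verbatim from the trace):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    add_remove() {    # hypothetical worker; the real script traces as lines 16-18
        local n=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do       # line 16: (( ++i )) / (( i < 10 ))
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "$bdev"   # line 17
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"           # line 18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # namespace n backed by bdev null(n-1)
    done
    wait

Whatever the exact control flow, the effect seen by the target is the same: namespaces of cnode1 appear and disappear continuously, which is the hot-plug condition under test.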
00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.745 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.745 rmmod nvme_tcp 00:13:04.002 rmmod nvme_fabrics 00:13:04.002 rmmod nvme_keyring 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1453804 ']' 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1453804 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1453804 ']' 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1453804 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1453804 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1453804' 00:13:04.002 killing process with pid 1453804 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1453804 00:13:04.002 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1453804 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.261 07:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.160 07:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.160 00:13:06.160 real 0m45.875s 00:13:06.160 user 3m28.064s 00:13:06.160 sys 0m16.546s 00:13:06.160 07:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.160 07:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.160 ************************************ 00:13:06.160 END TEST nvmf_ns_hotplug_stress 00:13:06.160 ************************************ 00:13:06.160 07:00:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:06.160 07:00:35 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:06.160 07:00:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:06.160 07:00:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.160 07:00:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.160 ************************************ 00:13:06.160 START TEST nvmf_connect_stress 00:13:06.160 ************************************ 00:13:06.160 07:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:06.417 * Looking for test storage... 
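
The real/user/sys triple and the starred banners above mark a harness boundary: each suite in this log is launched through run_test from autotest_common.sh, which times the script and prints the START/END banner pair. A rough sketch of that wrapper, hedged because only its output appears in this log:

    run_test() {                     # sketch; the real helper lives in
        local name=$1; shift         # common/autotest_common.sh
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # e.g. connect_stress.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

So nvmf_ns_hotplug_stress finishes here after roughly 46 seconds of wall time, and the same wrapper immediately starts nvmf_connect_stress with --transport=tcp.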
00:13:06.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.417 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.418 07:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.373 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:08.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:08.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:08.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.374 07:00:37 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:08.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:08.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:13:08.374 00:13:08.374 --- 10.0.0.2 ping statistics --- 00:13:08.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.374 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:13:08.374 00:13:08.374 --- 10.0.0.1 ping statistics --- 00:13:08.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.374 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1461516 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1461516 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1461516 ']' 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.374 07:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.632 [2024-07-13 07:00:37.843553] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
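
Before the target application started above, nvmf_tcp_init laid out the two-port test network that the pings just verified. Collected verbatim from the trace (only the comments are added): the first E810 port, cvl_0_0, is moved into a private network namespace to act as the target, while cvl_0_1 stays in the root namespace as the initiator.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Placing the target port in a namespace is what lets a single host exercise a real NIC-to-NIC TCP path; nvmf_tgt is then started under ip netns exec cvl_0_0_ns_spdk, as the NVMF_APP line above shows.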
00:13:08.632 [2024-07-13 07:00:37.843652] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.632 [2024-07-13 07:00:37.881837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:08.632 [2024-07-13 07:00:37.914203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.632 [2024-07-13 07:00:38.003779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.632 [2024-07-13 07:00:38.003851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.632 [2024-07-13 07:00:38.003885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.632 [2024-07-13 07:00:38.003900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.632 [2024-07-13 07:00:38.003912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.632 [2024-07-13 07:00:38.004006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.632 [2024-07-13 07:00:38.004122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.632 [2024-07-13 07:00:38.004124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.890 [2024-07-13 07:00:38.151288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.890 [2024-07-13 07:00:38.177016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.890 NULL1 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1461540 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.890 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
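
Flattened out, the rpc_cmd calls traced above are the entire target-side setup for this test, followed by the launch of the stress initiator. Everything below is copied from the trace; only the backgrounding and the $! capture are inferred from the PERF_PID assignment:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
    connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!                                 # 1461540 in this run

The -m 10 flag caps the subsystem at ten namespaces, and the seq 1 20 / cat loop below assembles rpc.txt, a batch of twenty queued RPC commands (their names are not visible in this trace) to replay while the initiator churns connections.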
00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461540 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.891 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.148 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.148 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461540 00:13:09.148 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.148 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.148 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.712 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.712 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461540 00:13:09.712 07:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.712 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.712 07:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.973 
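
Everything from here to the end of the section is one polling loop: connect_stress.sh line 34 checks with kill -0 that the initiator (PID 1461540) is still alive, and line 35 replays the queued RPC batch against the target, so configuration traffic and fabric connect/disconnect churn overlap for the duration of the -t 10 run. A minimal sketch, assuming rpc_cmd consumes the rpc.txt batch on stdin (the trace shows the call but not its arguments):

    while kill -0 "$PERF_PID"; do    # line 34: is connect_stress still running?
        rpc_cmd < "$rpcs"            # line 35: replay the queued batch
    done

The [[ 0 == 0 ]] lines in between are status bookkeeping inside autotest_common.sh; the loop ends when connect_stress exits on its own.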
(liveness-poll xtrace elided: target/connect_stress.sh@34 `kill -0 1461540` followed by target/connect_stress.sh@35 `rpc_cmd`, each pass wrapped in the usual common/autotest_common.sh@587/@559/@10 lines, repeats roughly every 250-550 ms from 00:13:09.148 through 00:13:19.027 (07:00:38-07:00:48); every check succeeds while the 10-second stress run is active)
Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461540 00:13:19.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1461540) - No such process 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1461540 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.285 rmmod nvme_tcp 00:13:19.285 rmmod nvme_fabrics 00:13:19.285 rmmod nvme_keyring 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1461516 ']' 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1461516 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1461516 ']' 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1461516 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1461516 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1461516' killing process with pid 1461516 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1461516 00:13:19.285 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1461516 00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
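The teardown above follows a fixed shape: reap the stress process, delete the batch file, clear the trap, then nvmftestfini unloads the kernel modules and kills the target. A simplified sketch of that sequence in bash, with the retry interval assumed (the log shows the {1..20} loop but not its back-off):

    wait "$PERF_PID" 2> /dev/null || true   # already gone: kill -0 reported "No such process"
    rm -f "$rpcs"
    trap - SIGINT SIGTERM EXIT
    set +e                                  # modprobe -r can fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # success prints the rmmod lines seen above
        sleep 1                             # back-off between attempts (interval assumed)
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$nvmfpid" ]; then              # nvmfpid=1461516 in this run
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid" && wait "$nvmfpid"
    fi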
00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.541 07:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.557 07:00:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:21.557 00:13:21.557 real 0m15.267s 00:13:21.557 user 0m38.183s 00:13:21.557 sys 0m5.940s 00:13:21.557 07:00:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.557 07:00:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.557 ************************************ 00:13:21.557 END TEST nvmf_connect_stress 00:13:21.557 ************************************ 00:13:21.557 07:00:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:21.557 07:00:50 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:21.557 07:00:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:21.557 07:00:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.557 07:00:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.557 ************************************ 00:13:21.557 START TEST nvmf_fused_ordering 00:13:21.557 ************************************ 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:21.557
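The asterisk banners and the real/user/sys block above come from the run_test harness that wraps each sub-test. A simplified sketch of its shape (assumed; the real helper in autotest_common.sh also manages xtrace and timing bookkeeping):

    run_test() {
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        time "$@"                           # yields the real/user/sys block on completion
        echo "************ END TEST $test_name ************"
    }
    run_test nvmf_fused_ordering \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp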
* Looking for test storage... 00:13:21.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.557
(PATH echoes elided: paths/export.sh@2-@4 each re-prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the inherited PATH, @5 runs `export PATH`, and @6 echoes the resulting value; the full several-hundred-character strings are omitted)
00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14>
/dev/null' 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:21.557 07:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:23.466 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:23.466 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:23.466 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.466 07:00:52 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:23.466 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.466 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:23.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:13:23.467 00:13:23.467 --- 10.0.0.2 ping statistics --- 00:13:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.467 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:13:23.467 00:13:23.467 --- 10.0.0.1 ping statistics --- 00:13:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.467 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:23.467 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1464685 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1464685 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1464685 ']' 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.724 07:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.724 [2024-07-13 07:00:52.987305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
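With the two e810 ports found earlier (cvl_0_0 and cvl_0_1), nvmftestinit splits them across a network namespace so initiator and target traverse a real TCP path: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the host namespace as 10.0.0.1, and the two pings above confirm both directions before the target starts. The same topology as a standalone sketch (device names and addresses taken from this log; run as root, error handling omitted):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                           # host -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> host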
00:13:23.724 [2024-07-13 07:00:52.987383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.724 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.724 [2024-07-13 07:00:53.026521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:23.724 [2024-07-13 07:00:53.052872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.724 [2024-07-13 07:00:53.139977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.724 [2024-07-13 07:00:53.140038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.724 [2024-07-13 07:00:53.140052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.724 [2024-07-13 07:00:53.140063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.724 [2024-07-13 07:00:53.140072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.724 [2024-07-13 07:00:53.140103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 [2024-07-13 07:00:53.277658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 [2024-07-13 07:00:53.293850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 NULL1 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.981 07:00:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:23.981 [2024-07-13 07:00:53.338717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:23.981 [2024-07-13 07:00:53.338758] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464709 ] 00:13:23.981 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.981 [2024-07-13 07:00:53.370051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
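The fused_ordering bring-up repeats the transport/subsystem/listener/namespace sequence via rpc_cmd, adding bdev_wait_for_examine before the namespace attach. Written as direct scripts/rpc.py calls for reference (a sketch: rpc_cmd in this log is the autotest wrapper around rpc.py, and the default RPC socket is assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # same flags as the rpc_cmd above
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512 B blocks -> "size: 1GB"
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1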
00:13:24.544 Attached to nqn.2016-06.io.spdk:cnode1 00:13:24.544 Namespace ID: 1 size: 1GB
(fused_ordering counter stream elided: the fused_ordering app printed fused_ordering(0) through fused_ordering(537) sequentially, timestamps stepping 00:13:24.544 -> 00:13:25.108 -> 00:13:25.366; the captured log breaks off mid-run after fused_ordering(537))
00:13:25.366 fused_ordering(538) 00:13:25.366 fused_ordering(539) 00:13:25.366 fused_ordering(540) 00:13:25.366 fused_ordering(541) 00:13:25.366 fused_ordering(542) 00:13:25.366 fused_ordering(543) 00:13:25.366 fused_ordering(544) 00:13:25.366 fused_ordering(545) 00:13:25.366 fused_ordering(546) 00:13:25.366 fused_ordering(547) 00:13:25.366 fused_ordering(548) 00:13:25.366 fused_ordering(549) 00:13:25.366 fused_ordering(550) 00:13:25.366 fused_ordering(551) 00:13:25.366 fused_ordering(552) 00:13:25.366 fused_ordering(553) 00:13:25.366 fused_ordering(554) 00:13:25.366 fused_ordering(555) 00:13:25.366 fused_ordering(556) 00:13:25.366 fused_ordering(557) 00:13:25.366 fused_ordering(558) 00:13:25.366 fused_ordering(559) 00:13:25.366 fused_ordering(560) 00:13:25.366 fused_ordering(561) 00:13:25.366 fused_ordering(562) 00:13:25.366 fused_ordering(563) 00:13:25.366 fused_ordering(564) 00:13:25.366 fused_ordering(565) 00:13:25.367 fused_ordering(566) 00:13:25.367 fused_ordering(567) 00:13:25.367 fused_ordering(568) 00:13:25.367 fused_ordering(569) 00:13:25.367 fused_ordering(570) 00:13:25.367 fused_ordering(571) 00:13:25.367 fused_ordering(572) 00:13:25.367 fused_ordering(573) 00:13:25.367 fused_ordering(574) 00:13:25.367 fused_ordering(575) 00:13:25.367 fused_ordering(576) 00:13:25.367 fused_ordering(577) 00:13:25.367 fused_ordering(578) 00:13:25.367 fused_ordering(579) 00:13:25.367 fused_ordering(580) 00:13:25.367 fused_ordering(581) 00:13:25.367 fused_ordering(582) 00:13:25.367 fused_ordering(583) 00:13:25.367 fused_ordering(584) 00:13:25.367 fused_ordering(585) 00:13:25.367 fused_ordering(586) 00:13:25.367 fused_ordering(587) 00:13:25.367 fused_ordering(588) 00:13:25.367 fused_ordering(589) 00:13:25.367 fused_ordering(590) 00:13:25.367 fused_ordering(591) 00:13:25.367 fused_ordering(592) 00:13:25.367 fused_ordering(593) 00:13:25.367 fused_ordering(594) 00:13:25.367 fused_ordering(595) 00:13:25.367 fused_ordering(596) 00:13:25.367 fused_ordering(597) 00:13:25.367 fused_ordering(598) 00:13:25.367 fused_ordering(599) 00:13:25.367 fused_ordering(600) 00:13:25.367 fused_ordering(601) 00:13:25.367 fused_ordering(602) 00:13:25.367 fused_ordering(603) 00:13:25.367 fused_ordering(604) 00:13:25.367 fused_ordering(605) 00:13:25.367 fused_ordering(606) 00:13:25.367 fused_ordering(607) 00:13:25.367 fused_ordering(608) 00:13:25.367 fused_ordering(609) 00:13:25.367 fused_ordering(610) 00:13:25.367 fused_ordering(611) 00:13:25.367 fused_ordering(612) 00:13:25.367 fused_ordering(613) 00:13:25.367 fused_ordering(614) 00:13:25.367 fused_ordering(615) 00:13:26.298 fused_ordering(616) 00:13:26.298 fused_ordering(617) 00:13:26.298 fused_ordering(618) 00:13:26.298 fused_ordering(619) 00:13:26.298 fused_ordering(620) 00:13:26.298 fused_ordering(621) 00:13:26.298 fused_ordering(622) 00:13:26.298 fused_ordering(623) 00:13:26.298 fused_ordering(624) 00:13:26.298 fused_ordering(625) 00:13:26.298 fused_ordering(626) 00:13:26.298 fused_ordering(627) 00:13:26.298 fused_ordering(628) 00:13:26.298 fused_ordering(629) 00:13:26.298 fused_ordering(630) 00:13:26.298 fused_ordering(631) 00:13:26.298 fused_ordering(632) 00:13:26.298 fused_ordering(633) 00:13:26.298 fused_ordering(634) 00:13:26.298 fused_ordering(635) 00:13:26.298 fused_ordering(636) 00:13:26.298 fused_ordering(637) 00:13:26.298 fused_ordering(638) 00:13:26.298 fused_ordering(639) 00:13:26.298 fused_ordering(640) 00:13:26.298 fused_ordering(641) 00:13:26.298 fused_ordering(642) 00:13:26.298 fused_ordering(643) 00:13:26.298 fused_ordering(644) 00:13:26.298 
fused_ordering(645) 00:13:26.298 fused_ordering(646) 00:13:26.298 fused_ordering(647) 00:13:26.298 fused_ordering(648) 00:13:26.298 fused_ordering(649) 00:13:26.298 fused_ordering(650) 00:13:26.298 fused_ordering(651) 00:13:26.298 fused_ordering(652) 00:13:26.298 fused_ordering(653) 00:13:26.298 fused_ordering(654) 00:13:26.298 fused_ordering(655) 00:13:26.298 fused_ordering(656) 00:13:26.298 fused_ordering(657) 00:13:26.298 fused_ordering(658) 00:13:26.298 fused_ordering(659) 00:13:26.298 fused_ordering(660) 00:13:26.298 fused_ordering(661) 00:13:26.298 fused_ordering(662) 00:13:26.298 fused_ordering(663) 00:13:26.298 fused_ordering(664) 00:13:26.298 fused_ordering(665) 00:13:26.298 fused_ordering(666) 00:13:26.298 fused_ordering(667) 00:13:26.298 fused_ordering(668) 00:13:26.298 fused_ordering(669) 00:13:26.298 fused_ordering(670) 00:13:26.298 fused_ordering(671) 00:13:26.298 fused_ordering(672) 00:13:26.298 fused_ordering(673) 00:13:26.298 fused_ordering(674) 00:13:26.298 fused_ordering(675) 00:13:26.298 fused_ordering(676) 00:13:26.298 fused_ordering(677) 00:13:26.298 fused_ordering(678) 00:13:26.298 fused_ordering(679) 00:13:26.298 fused_ordering(680) 00:13:26.298 fused_ordering(681) 00:13:26.298 fused_ordering(682) 00:13:26.298 fused_ordering(683) 00:13:26.298 fused_ordering(684) 00:13:26.298 fused_ordering(685) 00:13:26.298 fused_ordering(686) 00:13:26.298 fused_ordering(687) 00:13:26.298 fused_ordering(688) 00:13:26.298 fused_ordering(689) 00:13:26.298 fused_ordering(690) 00:13:26.298 fused_ordering(691) 00:13:26.298 fused_ordering(692) 00:13:26.298 fused_ordering(693) 00:13:26.298 fused_ordering(694) 00:13:26.298 fused_ordering(695) 00:13:26.298 fused_ordering(696) 00:13:26.298 fused_ordering(697) 00:13:26.298 fused_ordering(698) 00:13:26.298 fused_ordering(699) 00:13:26.298 fused_ordering(700) 00:13:26.298 fused_ordering(701) 00:13:26.298 fused_ordering(702) 00:13:26.298 fused_ordering(703) 00:13:26.298 fused_ordering(704) 00:13:26.298 fused_ordering(705) 00:13:26.298 fused_ordering(706) 00:13:26.298 fused_ordering(707) 00:13:26.298 fused_ordering(708) 00:13:26.298 fused_ordering(709) 00:13:26.298 fused_ordering(710) 00:13:26.298 fused_ordering(711) 00:13:26.298 fused_ordering(712) 00:13:26.298 fused_ordering(713) 00:13:26.298 fused_ordering(714) 00:13:26.298 fused_ordering(715) 00:13:26.298 fused_ordering(716) 00:13:26.298 fused_ordering(717) 00:13:26.298 fused_ordering(718) 00:13:26.298 fused_ordering(719) 00:13:26.298 fused_ordering(720) 00:13:26.298 fused_ordering(721) 00:13:26.298 fused_ordering(722) 00:13:26.298 fused_ordering(723) 00:13:26.298 fused_ordering(724) 00:13:26.298 fused_ordering(725) 00:13:26.298 fused_ordering(726) 00:13:26.298 fused_ordering(727) 00:13:26.298 fused_ordering(728) 00:13:26.298 fused_ordering(729) 00:13:26.298 fused_ordering(730) 00:13:26.298 fused_ordering(731) 00:13:26.298 fused_ordering(732) 00:13:26.298 fused_ordering(733) 00:13:26.298 fused_ordering(734) 00:13:26.298 fused_ordering(735) 00:13:26.298 fused_ordering(736) 00:13:26.298 fused_ordering(737) 00:13:26.298 fused_ordering(738) 00:13:26.298 fused_ordering(739) 00:13:26.298 fused_ordering(740) 00:13:26.298 fused_ordering(741) 00:13:26.298 fused_ordering(742) 00:13:26.298 fused_ordering(743) 00:13:26.298 fused_ordering(744) 00:13:26.298 fused_ordering(745) 00:13:26.298 fused_ordering(746) 00:13:26.298 fused_ordering(747) 00:13:26.298 fused_ordering(748) 00:13:26.298 fused_ordering(749) 00:13:26.298 fused_ordering(750) 00:13:26.298 fused_ordering(751) 00:13:26.298 fused_ordering(752) 
00:13:26.298 fused_ordering(753) 00:13:26.298 fused_ordering(754) 00:13:26.298 fused_ordering(755) 00:13:26.298 fused_ordering(756) 00:13:26.298 fused_ordering(757) 00:13:26.298 fused_ordering(758) 00:13:26.298 fused_ordering(759) 00:13:26.298 fused_ordering(760) 00:13:26.298 fused_ordering(761) 00:13:26.298 fused_ordering(762) 00:13:26.298 fused_ordering(763) 00:13:26.298 fused_ordering(764) 00:13:26.298 fused_ordering(765) 00:13:26.298 fused_ordering(766) 00:13:26.298 fused_ordering(767) 00:13:26.298 fused_ordering(768) 00:13:26.298 fused_ordering(769) 00:13:26.298 fused_ordering(770) 00:13:26.298 fused_ordering(771) 00:13:26.298 fused_ordering(772) 00:13:26.298 fused_ordering(773) 00:13:26.298 fused_ordering(774) 00:13:26.298 fused_ordering(775) 00:13:26.298 fused_ordering(776) 00:13:26.298 fused_ordering(777) 00:13:26.298 fused_ordering(778) 00:13:26.298 fused_ordering(779) 00:13:26.298 fused_ordering(780) 00:13:26.298 fused_ordering(781) 00:13:26.298 fused_ordering(782) 00:13:26.298 fused_ordering(783) 00:13:26.298 fused_ordering(784) 00:13:26.298 fused_ordering(785) 00:13:26.298 fused_ordering(786) 00:13:26.298 fused_ordering(787) 00:13:26.298 fused_ordering(788) 00:13:26.298 fused_ordering(789) 00:13:26.298 fused_ordering(790) 00:13:26.298 fused_ordering(791) 00:13:26.298 fused_ordering(792) 00:13:26.298 fused_ordering(793) 00:13:26.298 fused_ordering(794) 00:13:26.298 fused_ordering(795) 00:13:26.298 fused_ordering(796) 00:13:26.298 fused_ordering(797) 00:13:26.298 fused_ordering(798) 00:13:26.298 fused_ordering(799) 00:13:26.298 fused_ordering(800) 00:13:26.298 fused_ordering(801) 00:13:26.298 fused_ordering(802) 00:13:26.298 fused_ordering(803) 00:13:26.298 fused_ordering(804) 00:13:26.298 fused_ordering(805) 00:13:26.298 fused_ordering(806) 00:13:26.298 fused_ordering(807) 00:13:26.298 fused_ordering(808) 00:13:26.298 fused_ordering(809) 00:13:26.298 fused_ordering(810) 00:13:26.298 fused_ordering(811) 00:13:26.298 fused_ordering(812) 00:13:26.298 fused_ordering(813) 00:13:26.298 fused_ordering(814) 00:13:26.298 fused_ordering(815) 00:13:26.298 fused_ordering(816) 00:13:26.298 fused_ordering(817) 00:13:26.298 fused_ordering(818) 00:13:26.298 fused_ordering(819) 00:13:26.298 fused_ordering(820) 00:13:26.864 fused_ordering(821) 00:13:26.864 fused_ordering(822) 00:13:26.864 fused_ordering(823) 00:13:26.864 fused_ordering(824) 00:13:26.864 fused_ordering(825) 00:13:26.864 fused_ordering(826) 00:13:26.864 fused_ordering(827) 00:13:26.864 fused_ordering(828) 00:13:26.864 fused_ordering(829) 00:13:26.864 fused_ordering(830) 00:13:26.864 fused_ordering(831) 00:13:26.864 fused_ordering(832) 00:13:26.864 fused_ordering(833) 00:13:26.864 fused_ordering(834) 00:13:26.864 fused_ordering(835) 00:13:26.864 fused_ordering(836) 00:13:26.864 fused_ordering(837) 00:13:26.864 fused_ordering(838) 00:13:26.864 fused_ordering(839) 00:13:26.864 fused_ordering(840) 00:13:26.864 fused_ordering(841) 00:13:26.864 fused_ordering(842) 00:13:26.864 fused_ordering(843) 00:13:26.864 fused_ordering(844) 00:13:26.864 fused_ordering(845) 00:13:26.864 fused_ordering(846) 00:13:26.864 fused_ordering(847) 00:13:26.864 fused_ordering(848) 00:13:26.864 fused_ordering(849) 00:13:26.864 fused_ordering(850) 00:13:26.864 fused_ordering(851) 00:13:26.864 fused_ordering(852) 00:13:26.864 fused_ordering(853) 00:13:26.864 fused_ordering(854) 00:13:26.864 fused_ordering(855) 00:13:26.864 fused_ordering(856) 00:13:26.864 fused_ordering(857) 00:13:26.864 fused_ordering(858) 00:13:26.864 fused_ordering(859) 00:13:26.864 
fused_ordering(860) 00:13:26.864 fused_ordering(861) 00:13:26.864 fused_ordering(862) 00:13:26.864 fused_ordering(863) 00:13:26.864 fused_ordering(864) 00:13:26.864 fused_ordering(865) 00:13:26.864 fused_ordering(866) 00:13:26.864 fused_ordering(867) 00:13:26.864 fused_ordering(868) 00:13:26.864 fused_ordering(869) 00:13:26.864 fused_ordering(870) 00:13:26.864 fused_ordering(871) 00:13:26.864 fused_ordering(872) 00:13:26.864 fused_ordering(873) 00:13:26.864 fused_ordering(874) 00:13:26.864 fused_ordering(875) 00:13:26.864 fused_ordering(876) 00:13:26.864 fused_ordering(877) 00:13:26.864 fused_ordering(878) 00:13:26.864 fused_ordering(879) 00:13:26.864 fused_ordering(880) 00:13:26.864 fused_ordering(881) 00:13:26.864 fused_ordering(882) 00:13:26.864 fused_ordering(883) 00:13:26.864 fused_ordering(884) 00:13:26.864 fused_ordering(885) 00:13:26.864 fused_ordering(886) 00:13:26.864 fused_ordering(887) 00:13:26.864 fused_ordering(888) 00:13:26.864 fused_ordering(889) 00:13:26.864 fused_ordering(890) 00:13:26.864 fused_ordering(891) 00:13:26.864 fused_ordering(892) 00:13:26.864 fused_ordering(893) 00:13:26.864 fused_ordering(894) 00:13:26.864 fused_ordering(895) 00:13:26.864 fused_ordering(896) 00:13:26.864 fused_ordering(897) 00:13:26.864 fused_ordering(898) 00:13:26.864 fused_ordering(899) 00:13:26.864 fused_ordering(900) 00:13:26.864 fused_ordering(901) 00:13:26.864 fused_ordering(902) 00:13:26.864 fused_ordering(903) 00:13:26.864 fused_ordering(904) 00:13:26.864 fused_ordering(905) 00:13:26.864 fused_ordering(906) 00:13:26.864 fused_ordering(907) 00:13:26.864 fused_ordering(908) 00:13:26.864 fused_ordering(909) 00:13:26.864 fused_ordering(910) 00:13:26.864 fused_ordering(911) 00:13:26.864 fused_ordering(912) 00:13:26.864 fused_ordering(913) 00:13:26.864 fused_ordering(914) 00:13:26.864 fused_ordering(915) 00:13:26.864 fused_ordering(916) 00:13:26.864 fused_ordering(917) 00:13:26.864 fused_ordering(918) 00:13:26.864 fused_ordering(919) 00:13:26.864 fused_ordering(920) 00:13:26.864 fused_ordering(921) 00:13:26.864 fused_ordering(922) 00:13:26.864 fused_ordering(923) 00:13:26.864 fused_ordering(924) 00:13:26.864 fused_ordering(925) 00:13:26.864 fused_ordering(926) 00:13:26.864 fused_ordering(927) 00:13:26.864 fused_ordering(928) 00:13:26.864 fused_ordering(929) 00:13:26.864 fused_ordering(930) 00:13:26.864 fused_ordering(931) 00:13:26.864 fused_ordering(932) 00:13:26.864 fused_ordering(933) 00:13:26.864 fused_ordering(934) 00:13:26.864 fused_ordering(935) 00:13:26.864 fused_ordering(936) 00:13:26.864 fused_ordering(937) 00:13:26.864 fused_ordering(938) 00:13:26.864 fused_ordering(939) 00:13:26.864 fused_ordering(940) 00:13:26.864 fused_ordering(941) 00:13:26.864 fused_ordering(942) 00:13:26.864 fused_ordering(943) 00:13:26.864 fused_ordering(944) 00:13:26.864 fused_ordering(945) 00:13:26.864 fused_ordering(946) 00:13:26.864 fused_ordering(947) 00:13:26.864 fused_ordering(948) 00:13:26.864 fused_ordering(949) 00:13:26.864 fused_ordering(950) 00:13:26.864 fused_ordering(951) 00:13:26.864 fused_ordering(952) 00:13:26.864 fused_ordering(953) 00:13:26.864 fused_ordering(954) 00:13:26.864 fused_ordering(955) 00:13:26.864 fused_ordering(956) 00:13:26.864 fused_ordering(957) 00:13:26.864 fused_ordering(958) 00:13:26.864 fused_ordering(959) 00:13:26.864 fused_ordering(960) 00:13:26.864 fused_ordering(961) 00:13:26.864 fused_ordering(962) 00:13:26.864 fused_ordering(963) 00:13:26.864 fused_ordering(964) 00:13:26.864 fused_ordering(965) 00:13:26.864 fused_ordering(966) 00:13:26.864 fused_ordering(967) 
00:13:26.864 fused_ordering(968) 00:13:26.864 fused_ordering(969) 00:13:26.864 fused_ordering(970) 00:13:26.864 fused_ordering(971) 00:13:26.864 fused_ordering(972) 00:13:26.864 fused_ordering(973) 00:13:26.864 fused_ordering(974) 00:13:26.864 fused_ordering(975) 00:13:26.864 fused_ordering(976) 00:13:26.864 fused_ordering(977) 00:13:26.864 fused_ordering(978) 00:13:26.864 fused_ordering(979) 00:13:26.864 fused_ordering(980) 00:13:26.864 fused_ordering(981) 00:13:26.864 fused_ordering(982) 00:13:26.864 fused_ordering(983) 00:13:26.864 fused_ordering(984) 00:13:26.864 fused_ordering(985) 00:13:26.864 fused_ordering(986) 00:13:26.864 fused_ordering(987) 00:13:26.864 fused_ordering(988) 00:13:26.864 fused_ordering(989) 00:13:26.864 fused_ordering(990) 00:13:26.864 fused_ordering(991) 00:13:26.864 fused_ordering(992) 00:13:26.864 fused_ordering(993) 00:13:26.864 fused_ordering(994) 00:13:26.864 fused_ordering(995) 00:13:26.864 fused_ordering(996) 00:13:26.864 fused_ordering(997) 00:13:26.864 fused_ordering(998) 00:13:26.864 fused_ordering(999) 00:13:26.864 fused_ordering(1000) 00:13:26.864 fused_ordering(1001) 00:13:26.864 fused_ordering(1002) 00:13:26.864 fused_ordering(1003) 00:13:26.864 fused_ordering(1004) 00:13:26.864 fused_ordering(1005) 00:13:26.864 fused_ordering(1006) 00:13:26.864 fused_ordering(1007) 00:13:26.864 fused_ordering(1008) 00:13:26.864 fused_ordering(1009) 00:13:26.864 fused_ordering(1010) 00:13:26.864 fused_ordering(1011) 00:13:26.864 fused_ordering(1012) 00:13:26.864 fused_ordering(1013) 00:13:26.864 fused_ordering(1014) 00:13:26.864 fused_ordering(1015) 00:13:26.864 fused_ordering(1016) 00:13:26.864 fused_ordering(1017) 00:13:26.864 fused_ordering(1018) 00:13:26.864 fused_ordering(1019) 00:13:26.864 fused_ordering(1020) 00:13:26.864 fused_ordering(1021) 00:13:26.864 fused_ordering(1022) 00:13:26.864 fused_ordering(1023) 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.864 rmmod nvme_tcp 00:13:26.864 rmmod nvme_fabrics 00:13:26.864 rmmod nvme_keyring 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1464685 ']' 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1464685 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1464685 ']' 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1464685 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:26.864 07:00:56 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1464685 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1464685' 00:13:26.864 killing process with pid 1464685 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1464685 00:13:26.864 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1464685 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.123 07:00:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.653 07:00:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:29.653 00:13:29.653 real 0m7.677s 00:13:29.653 user 0m5.276s 00:13:29.653 sys 0m3.582s 00:13:29.653 07:00:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.653 07:00:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:29.653 ************************************ 00:13:29.653 END TEST nvmf_fused_ordering 00:13:29.653 ************************************ 00:13:29.653 07:00:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:29.653 07:00:58 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:29.653 07:00:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:29.653 07:00:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.653 07:00:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:29.653 ************************************ 00:13:29.653 START TEST nvmf_delete_subsystem 00:13:29.653 ************************************ 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:29.653 * Looking for test storage... 
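Before the next test's output: the teardown traced at the end of nvmf_fused_ordering above (nvmftestfini -> nvmfcleanup -> killprocess) boils down to the following shell pattern. This is a minimal sketch assembled from the traced commands only; the sleep between modprobe attempts and the condensed one-line killprocess are assumptions the trace does not show.

  # Unload nvme-tcp with retries (the module can stay busy briefly while
  # connections drain), then the fabrics module, then kill the saved target pid.
  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
      sleep 1                            # assumed back-off; only the loop header is traced
  done
  modprobe -v -r nvme-fabrics
  set -e
  # killprocess: confirm the pid is alive, then kill and reap it
  kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid was 1464685 here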
00:13:29.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:29.653 07:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.549 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:31.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:31.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:31.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:31.550 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:13:31.550 00:13:31.550 --- 10.0.0.2 ping statistics --- 00:13:31.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.550 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:13:31.550 00:13:31.550 --- 10.0.0.1 ping statistics --- 00:13:31.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.550 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1467026 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1467026 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1467026 ']' 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
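The interface plumbing and target launch traced above are plain iproute2/iptables plus one nvmf_tgt invocation. A sketch of the same wiring, using exactly the device names and addresses from the trace (cvl_0_0 becomes the in-namespace target port at 10.0.0.2; cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1); the "$rootdir" variable and the socket-polling loop standing in for waitforlisten are assumed simplifications:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
  # Start the target inside the namespace, then wait for its RPC socket
  # ($rootdir = the spdk checkout; flags exactly as traced).
  ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!                                                         # 1467026 in this run
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done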
00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.550 07:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 [2024-07-13 07:01:00.821843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:31.550 [2024-07-13 07:01:00.821946] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.550 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.550 [2024-07-13 07:01:00.861017] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:31.550 [2024-07-13 07:01:00.891858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.550 [2024-07-13 07:01:00.986365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.550 [2024-07-13 07:01:00.986433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.550 [2024-07-13 07:01:00.986450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.550 [2024-07-13 07:01:00.986464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.550 [2024-07-13 07:01:00.986476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.550 [2024-07-13 07:01:00.986565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.550 [2024-07-13 07:01:00.986570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.808 [2024-07-13 07:01:01.139667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.808 [2024-07-13 07:01:01.155893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.808 NULL1 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.808 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.808 Delay0 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1467048 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:31.809 07:01:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:31.809 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.809 [2024-07-13 07:01:01.230600] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
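The provisioning sequence above amounts to the following command list. Sketch only: rpc_cmd in this harness forwards its argument list over the SPDK RPC socket, so the lines are written here as direct scripts/rpc.py calls, and the explicit backgrounding with '&'/'$!' is an assumption (the trace records only perf_pid=1467048).

  rpc.py nvmf_create_transport -t tcp -o -u 8192                      # transport opts exactly as traced; -u = 8 KiB I/O unit
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                               # allow any host, serial, max 10 namespaces
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512                              # 1000 MB null bdev, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # 1 s avg/p99 read+write latency (usec)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Background load generator; the deletion below will race these I/Os.
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!                                                         # 1467048 in this run
  sleep 2                                                             # let I/O pile up behind the 1 s delay bdev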
00:13:34.331 07:01:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.332 07:01:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.332 07:01:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 starting I/O failed: -6 [dozens of further 'Read/Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6', condensed] 00:13:34.332 [2024-07-13 07:01:03.482758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f55ec00cfe0 is same with the state(5) to be set [further 'Read/Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines condensed] 00:13:34.332 [2024-07-13 07:01:03.483533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80f20 is same with the state(5) to be set [remaining 'Read/Write completed with error (sct=0, sc=8)' completions condensed]
00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 Write completed with error (sct=0, sc=8) 00:13:34.332 Write completed with error (sct=0, sc=8) 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 Write completed with error (sct=0, sc=8) 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.332 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Write completed with error (sct=0, sc=8) 00:13:34.333 Write completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Write completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Write completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Read completed with error (sct=0, sc=8) 00:13:34.333 Write completed with error (sct=0, sc=8) 00:13:34.333 Write completed with error (sct=0, sc=8) 00:13:35.266 [2024-07-13 07:01:04.448814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9eb40 is same with the state(5) to be set 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read 
completed with error (sct=0, sc=8) 00:13:35.266 [2024-07-13 07:01:04.484680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81100 is same with the state(5) to be set 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 [2024-07-13 07:01:04.485146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f55ec00d2f0 is same with the state(5) to be set 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.266 Write completed with error (sct=0, sc=8) 00:13:35.266 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 [2024-07-13 07:01:04.485366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f55ec000c00 is same with the state(5) to be set 00:13:35.267 Read 
completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Write completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 Read completed with error (sct=0, sc=8) 00:13:35.267 [2024-07-13 07:01:04.485595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80d40 is same with the state(5) to be set 00:13:35.267 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.267 Initializing NVMe Controllers 00:13:35.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:35.267 Controller IO queue size 128, less than required. 00:13:35.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:35.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:35.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:35.267 Initialization complete. Launching workers. 
00:13:35.267 ======================================================== 00:13:35.267 Latency(us) 00:13:35.267 Device Information : IOPS MiB/s Average min max 00:13:35.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.26 0.08 978506.58 333.17 2003715.30 00:13:35.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.66 0.09 881749.66 725.54 1012021.60 00:13:35.267 ======================================================== 00:13:35.267 Total : 339.92 0.17 928221.23 333.17 2003715.30 00:13:35.267 00:13:35.267 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:35.267 [2024-07-13 07:01:04.486996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9eb40 (9): Bad file descriptor 00:13:35.267 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1467048 00:13:35.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:35.267 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1467048 00:13:35.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1467048) - No such process 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1467048 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1467048 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1467048 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.830 07:01:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
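The "(9): Bad file descriptor" flush error and "spdk_nvme_perf: errors occurred" above are the expected fallout of nvmf_delete_subsystem racing the queued I/O; the trace then verifies that the perf process died. The kill -0 / NOT wait idiom it uses, as a simplified sketch (the real NOT() in autotest_common.sh also validates its argument and special-cases signal exits, which is what the "(( es > 128 ))" step in the trace is doing):

    kill -0 "$perf_pid"      # sends no signal; the exit status just says whether the pid exists
    NOT() {                  # invert a command's status: succeed only if it fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT wait "$perf_pid"     # wait returns nonzero (es=1) because the child is already gone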
00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.830 [2024-07-13 07:01:05.007755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1467574 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:35.830 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:35.830 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.830 [2024-07-13 07:01:05.072543] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
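The repeating "(( delay++ > 20 ))" / "kill -0" / "sleep 0.5" lines that follow are a bounded wait for this second, shorter perf run (-t 3 this time, with a tighter 20-iteration cap than the first run's 30) to exit on its own; roughly:

    delay=0
    while kill -0 "$perf_pid"; do      # still running?
        (( delay++ > 20 )) && exit 1   # give up after ~10 s rather than hang the job
        sleep 0.5
    done

Once the loop drains, the summary further down reports averages just over 1,000,000 us with a ~1,000,000 us floor: that is the Delay0 latency configured earlier showing through, not a transport problem.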
00:13:36.087 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.087 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:36.087 07:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.650 07:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.650 07:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:36.650 07:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.215 07:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.215 07:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:37.215 07:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.812 07:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.812 07:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:37.812 07:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.374 07:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.374 07:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:38.374 07:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.632 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.632 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:38.632 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.889 Initializing NVMe Controllers 00:13:38.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.889 Controller IO queue size 128, less than required. 00:13:38.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:38.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:38.889 Initialization complete. Launching workers. 
00:13:38.889 ======================================================== 00:13:38.889 Latency(us) 00:13:38.889 Device Information : IOPS MiB/s Average min max 00:13:38.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005584.99 1000193.24 1044592.66 00:13:38.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004276.77 1000224.71 1010872.81 00:13:38.889 ======================================================== 00:13:38.889 Total : 256.00 0.12 1004930.88 1000193.24 1044592.66 00:13:38.889 00:13:39.146 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:39.146 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1467574 00:13:39.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1467574) - No such process 00:13:39.146 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1467574 00:13:39.146 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:39.146 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:39.147 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.147 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:39.147 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.147 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:39.147 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.147 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.147 rmmod nvme_tcp 00:13:39.147 rmmod nvme_fabrics 00:13:39.147 rmmod nvme_keyring 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1467026 ']' 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1467026 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1467026 ']' 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1467026 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467026 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467026' 00:13:39.404 killing process with pid 1467026 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1467026 00:13:39.404 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1467026 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.662 07:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.560 07:01:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.560 00:13:41.560 real 0m12.269s 00:13:41.560 user 0m28.018s 00:13:41.560 sys 0m3.021s 00:13:41.560 07:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.560 07:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.560 ************************************ 00:13:41.560 END TEST nvmf_delete_subsystem 00:13:41.560 ************************************ 00:13:41.560 07:01:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.560 07:01:10 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:41.560 07:01:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.560 07:01:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.560 07:01:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.560 ************************************ 00:13:41.560 START TEST nvmf_ns_masking 00:13:41.560 ************************************ 00:13:41.560 07:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:41.560 * Looking for test storage... 
00:13:41.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.560 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fab16a3a-c780-4cd0-872b-903b802e611b 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7162cc7a-8d28-45c1-9fc2-a8ac0c594894 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e1b8e5b8-d061-4a4d-a87f-958e1c33f629 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.819 07:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.716 
07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.716 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.716 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.717 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:13:43.974 00:13:43.974 --- 10.0.0.2 ping statistics --- 00:13:43.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.974 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:13:43.974 00:13:43.974 --- 10.0.0.1 ping statistics --- 00:13:43.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.974 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1469922 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1469922 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1469922 ']' 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.974 07:01:13 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.974 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.974 [2024-07-13 07:01:13.311308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:43.974 [2024-07-13 07:01:13.311394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.974 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.974 [2024-07-13 07:01:13.348122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:43.974 [2024-07-13 07:01:13.375259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.233 [2024-07-13 07:01:13.459340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.233 [2024-07-13 07:01:13.459391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.233 [2024-07-13 07:01:13.459412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.233 [2024-07-13 07:01:13.459429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.233 [2024-07-13 07:01:13.459439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
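nvmftestinit and nvmfappstart, traced above, put the target side of the link into its own network namespace so that initiator (cvl_0_1) and target (cvl_0_0) traffic crosses a real port pair. Condensed from the traced commands (a sketch, not the full helper):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # sanity-check the path both ways
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                                         # 1469922 here; waitforlisten polls /var/tmp/spdk.sock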
00:13:44.233 [2024-07-13 07:01:13.459464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.233 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.233 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:44.233 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.233 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.233 07:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:44.233 07:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.233 07:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.491 [2024-07-13 07:01:13.865200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.491 07:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:44.491 07:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:44.491 07:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:44.747 Malloc1 00:13:44.747 07:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:45.003 Malloc2 00:13:45.261 07:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.519 07:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:45.776 07:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.033 [2024-07-13 07:01:15.255956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.033 07:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:46.033 07:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e1b8e5b8-d061-4a4d-a87f-958e1c33f629 -a 10.0.0.2 -s 4420 -i 4 00:13:46.033 07:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.033 07:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.033 07:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.033 07:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:46.033 07:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:48.556 [ 0]:0x1 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd4b08c8bb4d4f5d8a3f03e419d0406b 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd4b08c8bb4d4f5d8a3f03e419d0406b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:48.556 [ 0]:0x1 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd4b08c8bb4d4f5d8a3f03e419d0406b 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd4b08c8bb4d4f5d8a3f03e419d0406b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:48.556 [ 1]:0x2 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:48.556 07:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:48.813 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48cf2976f8674d5dafc0e84658344847 00:13:48.813 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48cf2976f8674d5dafc0e84658344847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:48.813 07:01:18 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:48.813 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.813 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.070 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e1b8e5b8-d061-4a4d-a87f-958e1c33f629 -a 10.0.0.2 -s 4420 -i 4 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:49.328 07:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:51.854 [ 0]:0x2 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48cf2976f8674d5dafc0e84658344847 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48cf2976f8674d5dafc0e84658344847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.854 07:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:51.854 [ 0]:0x1 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd4b08c8bb4d4f5d8a3f03e419d0406b 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd4b08c8bb4d4f5d8a3f03e419d0406b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:51.854 [ 1]:0x2 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:51.854 07:01:21 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48cf2976f8674d5dafc0e84658344847 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48cf2976f8674d5dafc0e84658344847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.854 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:52.418 [ 0]:0x2 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48cf2976f8674d5dafc0e84658344847 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48cf2976f8674d5dafc0e84658344847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:52.418 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:52.674 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:52.674 07:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e1b8e5b8-d061-4a4d-a87f-958e1c33f629 -a 10.0.0.2 -s 4420 -i 4 00:13:52.930 07:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:52.930 07:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:52.930 07:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.930 07:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:52.930 07:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:52.930 07:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:54.872 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.129 [ 0]:0x1 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd4b08c8bb4d4f5d8a3f03e419d0406b 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd4b08c8bb4d4f5d8a3f03e419d0406b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.129 [ 1]:0x2 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.129 
07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48cf2976f8674d5dafc0e84658344847 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48cf2976f8674d5dafc0e84658344847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.129 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.387 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.645 [ 0]:0x2 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48cf2976f8674d5dafc0e84658344847 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48cf2976f8674d5dafc0e84658344847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:55.645 07:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:55.904 [2024-07-13 07:01:25.109883] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:55.904 request: 00:13:55.904 { 00:13:55.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.904 "nsid": 2, 00:13:55.904 "host": "nqn.2016-06.io.spdk:host1", 00:13:55.904 "method": "nvmf_ns_remove_host", 00:13:55.904 "req_id": 1 00:13:55.904 } 00:13:55.904 Got JSON-RPC error response 00:13:55.904 response: 00:13:55.904 { 00:13:55.904 "code": -32602, 00:13:55.904 "message": "Invalid parameters" 00:13:55.904 } 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # 
ns_is_visible 0x1 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.904 [ 0]:0x2 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48cf2976f8674d5dafc0e84658344847 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48cf2976f8674d5dafc0e84658344847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1471539 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1471539 /var/tmp/host.sock 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1471539 ']' 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:55.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
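Note: the exchange above is the core of the namespace-masking test. A namespace added with --no-auto-visible starts out hidden from every host (nvme id-ns reports an all-zero NGUID, so the NOT wrapper expects the visibility check to fail), nvmf_ns_add_host grants a single host NQN access, and nvmf_ns_remove_host revokes it again. A minimal sketch of that round trip, reusing only the subsystem, bdev and NQNs that appear in this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Hide the namespace by default, then grant and revoke access for host1.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# On the initiator, visibility shows up as a non-zero NGUID:
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid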
00:13:55.904 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.905 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:56.163 [2024-07-13 07:01:25.403775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:56.163 [2024-07-13 07:01:25.403880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471539 ] 00:13:56.163 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.163 [2024-07-13 07:01:25.437192] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:56.163 [2024-07-13 07:01:25.469989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.163 [2024-07-13 07:01:25.563069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.419 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.420 07:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:56.420 07:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.676 07:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.934 07:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fab16a3a-c780-4cd0-872b-903b802e611b 00:13:56.934 07:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:56.934 07:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FAB16A3AC7804CD0872B903B802E611B -i 00:13:57.191 07:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7162cc7a-8d28-45c1-9fc2-a8ac0c594894 00:13:57.191 07:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:57.191 07:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7162CC7A8D2845C19FC2A8AC0C594894 -i 00:13:57.448 07:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:57.705 07:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:57.962 07:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:57.962 07:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:58.526 nvme0n1 
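The explicit -g arguments above come from uuid2nguid, which turns a UUID into the 32-hex-digit NGUID form that nvmf_subsystem_add_ns accepts; judging by the tr -d - call and the values logged here, it amounts to stripping the dashes and upper-casing the digits. An illustrative stand-in (the real helper lives in nvmf/common.sh; this one-liner is an assumption about its behavior, not its actual source):

# Hypothetical reimplementation of uuid2nguid, matching the values in this run.
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}
uuid2nguid fab16a3a-c780-4cd0-872b-903b802e611b
# -> FAB16A3AC7804CD0872B903B802E611B, the value passed via -g above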
00:13:58.526 07:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:58.526 07:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:58.782 nvme1n2 00:13:58.782 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:58.782 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:58.782 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:58.782 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:58.782 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:59.039 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:59.039 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:59.039 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:59.039 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:59.297 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fab16a3a-c780-4cd0-872b-903b802e611b == \f\a\b\1\6\a\3\a\-\c\7\8\0\-\4\c\d\0\-\8\7\2\b\-\9\0\3\b\8\0\2\e\6\1\1\b ]] 00:13:59.297 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:59.297 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:59.297 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7162cc7a-8d28-45c1-9fc2-a8ac0c594894 == \7\1\6\2\c\c\7\a\-\8\d\2\8\-\4\5\c\1\-\9\f\c\2\-\a\8\a\c\0\c\5\9\4\8\9\4 ]] 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1471539 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1471539 ']' 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1471539 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1471539 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1471539' 00:13:59.556 killing process with pid 1471539 00:13:59.556 07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1471539 00:13:59.556 
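The two bdev_nvme_attach_controller calls above attach one controller per host NQN through the secondary app listening on /var/tmp/host.sock, and the checks just above confirm that each host sees exactly the namespace it was granted and that each bdev's UUID round-trips to the NGUID set at add time. Condensed into a sketch (hostrpc is this script's wrapper around rpc.py -s /var/tmp/host.sock):

hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs     # expect: nvme0n1 nvme1n2
# The masked namespace's UUID must equal the UUID its NGUID was derived from.
hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'         # fab16a3a-c780-4cd0-872b-903b802e611b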
07:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1471539 00:13:59.814 07:01:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.380 rmmod nvme_tcp 00:14:00.380 rmmod nvme_fabrics 00:14:00.380 rmmod nvme_keyring 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1469922 ']' 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1469922 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1469922 ']' 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1469922 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469922 00:14:00.380 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:00.381 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:00.381 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469922' 00:14:00.381 killing process with pid 1469922 00:14:00.381 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1469922 00:14:00.381 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1469922 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.638 07:01:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.539 07:01:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:14:02.539 00:14:02.539 real 0m21.019s 00:14:02.539 user 0m27.128s 00:14:02.539 sys 0m4.150s 00:14:02.539 07:01:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.539 07:01:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.539 ************************************ 00:14:02.539 END TEST nvmf_ns_masking 00:14:02.539 ************************************ 00:14:02.798 07:01:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:02.798 07:01:31 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:02.798 07:01:31 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:02.798 07:01:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:02.798 07:01:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.798 07:01:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.798 ************************************ 00:14:02.798 START TEST nvmf_nvme_cli 00:14:02.798 ************************************ 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:02.798 * Looking for test storage... 00:14:02.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.798 07:01:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:04.697 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:04.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.697 07:01:34 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:04.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.697 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:04.698 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.698 07:01:34 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:14:04.956 00:14:04.956 --- 10.0.0.2 ping statistics --- 00:14:04.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.956 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:14:04.956 00:14:04.956 --- 10.0.0.1 ping statistics --- 00:14:04.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.956 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1473963 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1473963 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1473963 ']' 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
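For the phy TCP tests, nvmf_tcp_init pins one NIC port inside a private network namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exchange real packets on a single machine, then verifies the path with ping before launching nvmf_tgt. The sequence above, condensed ($SPDK stands in for the workspace spdk checkout):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF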
00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.956 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:04.956 [2024-07-13 07:01:34.252309] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:04.956 [2024-07-13 07:01:34.252383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.956 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.956 [2024-07-13 07:01:34.294747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:04.956 [2024-07-13 07:01:34.326059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.213 [2024-07-13 07:01:34.427491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.213 [2024-07-13 07:01:34.427551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.213 [2024-07-13 07:01:34.427568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.213 [2024-07-13 07:01:34.427582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.213 [2024-07-13 07:01:34.427594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.213 [2024-07-13 07:01:34.427675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.213 [2024-07-13 07:01:34.427741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.213 [2024-07-13 07:01:34.427766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.213 [2024-07-13 07:01:34.427769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.213 [2024-07-13 07:01:34.585737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.213 Malloc0 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.213 Malloc1 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.213 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.470 [2024-07-13 07:01:34.671898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:05.470 00:14:05.470 Discovery Log Number of Records 2, Generation counter 2 00:14:05.470 =====Discovery Log Entry 0====== 00:14:05.470 trtype: tcp 00:14:05.470 adrfam: ipv4 00:14:05.470 subtype: current discovery subsystem 00:14:05.470 treq: not required 00:14:05.470 portid: 0 00:14:05.470 trsvcid: 4420 00:14:05.470 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:05.470 traddr: 10.0.0.2 00:14:05.470 eflags: explicit discovery connections, duplicate discovery information 00:14:05.470 sectype: none 
00:14:05.470 =====Discovery Log Entry 1====== 00:14:05.470 trtype: tcp 00:14:05.470 adrfam: ipv4 00:14:05.470 subtype: nvme subsystem 00:14:05.470 treq: not required 00:14:05.470 portid: 0 00:14:05.470 trsvcid: 4420 00:14:05.470 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:05.470 traddr: 10.0.0.2 00:14:05.470 eflags: none 00:14:05.470 sectype: none 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:05.470 07:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.033 07:01:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:06.033 07:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.033 07:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.033 07:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:06.033 07:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:06.033 07:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:08.556 /dev/nvme0n1 ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:08.556 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.557 07:01:37 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.557 rmmod nvme_tcp 00:14:08.557 rmmod nvme_fabrics 00:14:08.557 rmmod nvme_keyring 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1473963 ']' 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1473963 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1473963 ']' 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1473963 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473963 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473963' 00:14:08.557 killing process with pid 1473963 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1473963 00:14:08.557 07:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1473963 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.853 07:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.784 07:01:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.784 00:14:10.784 real 0m8.056s 00:14:10.784 user 0m14.865s 00:14:10.784 sys 0m2.162s 00:14:10.784 07:01:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.784 07:01:40 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.784 ************************************ 00:14:10.784 END TEST nvmf_nvme_cli 00:14:10.784 ************************************ 00:14:10.784 07:01:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:10.784 07:01:40 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:10.784 07:01:40 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:10.784 07:01:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.784 07:01:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.784 07:01:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.784 ************************************ 00:14:10.784 START TEST nvmf_vfio_user 00:14:10.784 ************************************ 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:10.784 * Looking for test storage... 00:14:10.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1474834 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1474834' 00:14:10.784 Process pid: 1474834 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1474834 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1474834 ']' 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.784 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:11.043 [2024-07-13 07:01:40.258064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:11.043 [2024-07-13 07:01:40.258144] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.043 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.043 [2024-07-13 07:01:40.292056] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:11.043 [2024-07-13 07:01:40.322826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.043 [2024-07-13 07:01:40.416542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.043 [2024-07-13 07:01:40.416606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:11.043 [2024-07-13 07:01:40.416633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.043 [2024-07-13 07:01:40.416653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.043 [2024-07-13 07:01:40.416666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.043 [2024-07-13 07:01:40.416748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.043 [2024-07-13 07:01:40.416815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.043 [2024-07-13 07:01:40.416894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.043 [2024-07-13 07:01:40.416897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.300 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.300 07:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:11.300 07:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:12.232 07:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:12.489 07:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:12.489 07:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:12.489 07:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.489 07:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:12.489 07:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:12.746 Malloc1 00:14:12.746 07:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:13.310 07:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:13.310 07:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:13.578 07:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.578 07:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:13.578 07:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:13.840 Malloc2 00:14:13.840 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:14.097 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:14.354 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:14.613 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:14.613 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:14.613 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:14.613 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:14.613 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:14.613 07:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:14.613 [2024-07-13 07:01:43.973758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:14.613 [2024-07-13 07:01:43.973794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475256 ] 00:14:14.613 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.613 [2024-07-13 07:01:43.991509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:14.613 [2024-07-13 07:01:44.009042] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:14.613 [2024-07-13 07:01:44.015309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:14.613 [2024-07-13 07:01:44.015340] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f81dfd64000 00:14:14.613 [2024-07-13 07:01:44.016304] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:14.613 [2024-07-13 07:01:44.017302] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:14.613 [2024-07-13 07:01:44.018309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:14.613 [2024-07-13 07:01:44.019315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:14.614 [2024-07-13 07:01:44.020322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:14.614 [2024-07-13 07:01:44.021323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:14.614 [2024-07-13 07:01:44.022328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:14.614 [2024-07-13 07:01:44.023331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap 
offset 0 00:14:14.614 [2024-07-13 07:01:44.024339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:14.614 [2024-07-13 07:01:44.024357] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f81deb26000 00:14:14.614 [2024-07-13 07:01:44.025472] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:14.614 [2024-07-13 07:01:44.044161] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:14.614 [2024-07-13 07:01:44.044195] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:14.614 [2024-07-13 07:01:44.046474] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:14.614 [2024-07-13 07:01:44.046521] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:14.614 [2024-07-13 07:01:44.046609] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:14.614 [2024-07-13 07:01:44.046637] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:14.614 [2024-07-13 07:01:44.046647] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:14.614 [2024-07-13 07:01:44.047466] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:14.614 [2024-07-13 07:01:44.047490] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:14.614 [2024-07-13 07:01:44.047503] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:14.614 [2024-07-13 07:01:44.048468] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:14.614 [2024-07-13 07:01:44.048486] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:14.614 [2024-07-13 07:01:44.048499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:14.614 [2024-07-13 07:01:44.049474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:14.614 [2024-07-13 07:01:44.049491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:14.614 [2024-07-13 07:01:44.050477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:14.614 [2024-07-13 07:01:44.050495] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:14.614 [2024-07-13 
07:01:44.050503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:14.614 [2024-07-13 07:01:44.050514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:14.614 [2024-07-13 07:01:44.050623] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:14.614 [2024-07-13 07:01:44.050631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:14.614 [2024-07-13 07:01:44.050639] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:14.614 [2024-07-13 07:01:44.051485] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:14.614 [2024-07-13 07:01:44.052489] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:14.614 [2024-07-13 07:01:44.053499] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:14.614 [2024-07-13 07:01:44.054494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:14.614 [2024-07-13 07:01:44.054595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:14.614 [2024-07-13 07:01:44.055514] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:14.614 [2024-07-13 07:01:44.055531] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:14.614 [2024-07-13 07:01:44.055539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055562] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:14.614 [2024-07-13 07:01:44.055575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055601] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:14.614 [2024-07-13 07:01:44.055611] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:14.614 [2024-07-13 07:01:44.055629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:14.614 [2024-07-13 07:01:44.055687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:14.614 [2024-07-13 07:01:44.055702] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:14.614 [2024-07-13 
07:01:44.055713] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:14.614 [2024-07-13 07:01:44.055721] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:14.614 [2024-07-13 07:01:44.055729] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:14.614 [2024-07-13 07:01:44.055737] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:14.614 [2024-07-13 07:01:44.055745] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:14.614 [2024-07-13 07:01:44.055752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:14.614 [2024-07-13 07:01:44.055796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:14.614 [2024-07-13 07:01:44.055815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.614 [2024-07-13 07:01:44.055828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.614 [2024-07-13 07:01:44.055840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.614 [2024-07-13 07:01:44.055872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.614 [2024-07-13 07:01:44.055882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:14.614 [2024-07-13 07:01:44.055939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:14.614 [2024-07-13 07:01:44.055950] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:14.614 [2024-07-13 07:01:44.055959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:14.614 
[2024-07-13 07:01:44.055980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.055996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:14.614 [2024-07-13 07:01:44.056009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:14.614 [2024-07-13 07:01:44.056073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.056087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:14.614 [2024-07-13 07:01:44.056100] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:14.614 [2024-07-13 07:01:44.056109] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:14.614 [2024-07-13 07:01:44.056118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:14.614 [2024-07-13 07:01:44.056135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:14.614 [2024-07-13 07:01:44.056162] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:14.615 [2024-07-13 07:01:44.056181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056223] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:14.615 [2024-07-13 07:01:44.056231] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:14.615 [2024-07-13 07:01:44.056240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056325] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:14.615 [2024-07-13 07:01:44.056333] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:14.615 [2024-07-13 07:01:44.056342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056368] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056429] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:14.615 [2024-07-13 07:01:44.056436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:14.615 [2024-07-13 07:01:44.056445] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:14.615 [2024-07-13 07:01:44.056468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056588] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:14.615 [2024-07-13 07:01:44.056598] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:14.615 [2024-07-13 07:01:44.056604] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:14.615 [2024-07-13 07:01:44.056610] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:14.615 [2024-07-13 07:01:44.056619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:14.615 [2024-07-13 07:01:44.056629] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:14.615 [2024-07-13 07:01:44.056637] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:14.615 [2024-07-13 07:01:44.056645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056656] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:14.615 [2024-07-13 07:01:44.056663] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:14.615 [2024-07-13 07:01:44.056671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056683] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:14.615 [2024-07-13 07:01:44.056690] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:14.615 [2024-07-13 07:01:44.056698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:14.615 [2024-07-13 07:01:44.056712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:14.615 [2024-07-13 07:01:44.056759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:14.615 ===================================================== 00:14:14.615 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:14.615 ===================================================== 00:14:14.615 Controller Capabilities/Features 00:14:14.615 ================================ 00:14:14.615 Vendor ID: 4e58 00:14:14.615 Subsystem Vendor ID: 4e58 00:14:14.615 Serial Number: SPDK1 00:14:14.615 Model Number: SPDK bdev Controller 00:14:14.615 Firmware Version: 24.09 00:14:14.615 Recommended Arb Burst: 6 00:14:14.615 IEEE OUI Identifier: 8d 6b 50 00:14:14.615 Multi-path I/O 00:14:14.615 May have multiple subsystem ports: Yes 00:14:14.615 May have multiple controllers: Yes 00:14:14.615 Associated with SR-IOV VF: No 00:14:14.615 Max Data Transfer Size: 131072 00:14:14.615 Max Number of Namespaces: 32 00:14:14.615 Max Number of I/O Queues: 127 00:14:14.615 NVMe Specification Version (VS): 1.3 00:14:14.615 NVMe Specification Version (Identify): 1.3 00:14:14.615 Maximum Queue Entries: 256 
00:14:14.615 Contiguous Queues Required: Yes 00:14:14.615 Arbitration Mechanisms Supported 00:14:14.615 Weighted Round Robin: Not Supported 00:14:14.615 Vendor Specific: Not Supported 00:14:14.615 Reset Timeout: 15000 ms 00:14:14.615 Doorbell Stride: 4 bytes 00:14:14.615 NVM Subsystem Reset: Not Supported 00:14:14.615 Command Sets Supported 00:14:14.615 NVM Command Set: Supported 00:14:14.615 Boot Partition: Not Supported 00:14:14.615 Memory Page Size Minimum: 4096 bytes 00:14:14.615 Memory Page Size Maximum: 4096 bytes 00:14:14.615 Persistent Memory Region: Not Supported 00:14:14.615 Optional Asynchronous Events Supported 00:14:14.615 Namespace Attribute Notices: Supported 00:14:14.615 Firmware Activation Notices: Not Supported 00:14:14.615 ANA Change Notices: Not Supported 00:14:14.615 PLE Aggregate Log Change Notices: Not Supported 00:14:14.615 LBA Status Info Alert Notices: Not Supported 00:14:14.615 EGE Aggregate Log Change Notices: Not Supported 00:14:14.615 Normal NVM Subsystem Shutdown event: Not Supported 00:14:14.615 Zone Descriptor Change Notices: Not Supported 00:14:14.615 Discovery Log Change Notices: Not Supported 00:14:14.615 Controller Attributes 00:14:14.615 128-bit Host Identifier: Supported 00:14:14.615 Non-Operational Permissive Mode: Not Supported 00:14:14.615 NVM Sets: Not Supported 00:14:14.615 Read Recovery Levels: Not Supported 00:14:14.615 Endurance Groups: Not Supported 00:14:14.615 Predictable Latency Mode: Not Supported 00:14:14.615 Traffic Based Keep Alive: Not Supported 00:14:14.615 Namespace Granularity: Not Supported 00:14:14.615 SQ Associations: Not Supported 00:14:14.615 UUID List: Not Supported 00:14:14.615 Multi-Domain Subsystem: Not Supported 00:14:14.615 Fixed Capacity Management: Not Supported 00:14:14.615 Variable Capacity Management: Not Supported 00:14:14.615 Delete Endurance Group: Not Supported 00:14:14.615 Delete NVM Set: Not Supported 00:14:14.615 Extended LBA Formats Supported: Not Supported 00:14:14.615 Flexible Data Placement Supported: Not Supported 00:14:14.615 00:14:14.615 Controller Memory Buffer Support 00:14:14.615 ================================ 00:14:14.615 Supported: No 00:14:14.615 00:14:14.615 Persistent Memory Region Support 00:14:14.615 ================================ 00:14:14.615 Supported: No 00:14:14.615 00:14:14.615 Admin Command Set Attributes 00:14:14.615 ============================ 00:14:14.615 Security Send/Receive: Not Supported 00:14:14.615 Format NVM: Not Supported 00:14:14.615 Firmware Activate/Download: Not Supported 00:14:14.615 Namespace Management: Not Supported 00:14:14.615 Device Self-Test: Not Supported 00:14:14.615 Directives: Not Supported 00:14:14.615 NVMe-MI: Not Supported 00:14:14.615 Virtualization Management: Not Supported 00:14:14.615 Doorbell Buffer Config: Not Supported 00:14:14.615 Get LBA Status Capability: Not Supported 00:14:14.615 Command & Feature Lockdown Capability: Not Supported 00:14:14.615 Abort Command Limit: 4 00:14:14.615 Async Event Request Limit: 4 00:14:14.615 Number of Firmware Slots: N/A 00:14:14.615 Firmware Slot 1 Read-Only: N/A 00:14:14.615 Firmware Activation Without Reset: N/A 00:14:14.616 Multiple Update Detection Support: N/A 00:14:14.616 Firmware Update Granularity: No Information Provided 00:14:14.616 Per-Namespace SMART Log: No 00:14:14.616 Asymmetric Namespace Access Log Page: Not Supported 00:14:14.616 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:14.616 Command Effects Log Page: Supported 00:14:14.616 Get Log Page Extended Data: Supported 00:14:14.616 Telemetry
Log Pages: Not Supported 00:14:14.616 Persistent Event Log Pages: Not Supported 00:14:14.616 Supported Log Pages Log Page: May Support 00:14:14.616 Commands Supported & Effects Log Page: Not Supported 00:14:14.616 Feature Identifiers & Effects Log Page: May Support 00:14:14.616 NVMe-MI Commands & Effects Log Page: May Support 00:14:14.616 Data Area 4 for Telemetry Log: Not Supported 00:14:14.616 Error Log Page Entries Supported: 128 00:14:14.616 Keep Alive: Supported 00:14:14.616 Keep Alive Granularity: 10000 ms 00:14:14.616 00:14:14.616 NVM Command Set Attributes 00:14:14.616 ========================== 00:14:14.616 Submission Queue Entry Size 00:14:14.616 Max: 64 00:14:14.616 Min: 64 00:14:14.616 Completion Queue Entry Size 00:14:14.616 Max: 16 00:14:14.616 Min: 16 00:14:14.616 Number of Namespaces: 32 00:14:14.616 Compare Command: Supported 00:14:14.616 Write Uncorrectable Command: Not Supported 00:14:14.616 Dataset Management Command: Supported 00:14:14.616 Write Zeroes Command: Supported 00:14:14.616 Set Features Save Field: Not Supported 00:14:14.616 Reservations: Not Supported 00:14:14.616 Timestamp: Not Supported 00:14:14.616 Copy: Supported 00:14:14.616 Volatile Write Cache: Present 00:14:14.616 Atomic Write Unit (Normal): 1 00:14:14.616 Atomic Write Unit (PFail): 1 00:14:14.616 Atomic Compare & Write Unit: 1 00:14:14.616 Fused Compare & Write: Supported 00:14:14.616 Scatter-Gather List 00:14:14.616 SGL Command Set: Supported (Dword aligned) 00:14:14.616 SGL Keyed: Not Supported 00:14:14.616 SGL Bit Bucket Descriptor: Not Supported 00:14:14.616 SGL Metadata Pointer: Not Supported 00:14:14.616 Oversized SGL: Not Supported 00:14:14.616 SGL Metadata Address: Not Supported 00:14:14.616 SGL Offset: Not Supported 00:14:14.616 Transport SGL Data Block: Not Supported 00:14:14.616 Replay Protected Memory Block: Not Supported 00:14:14.616 00:14:14.616 Firmware Slot Information 00:14:14.616 ========================= 00:14:14.616 Active slot: 1 00:14:14.616 Slot 1 Firmware Revision: 24.09 00:14:14.616 00:14:14.616 00:14:14.616 Commands Supported and Effects 00:14:14.616 ============================== 00:14:14.616 Admin Commands 00:14:14.616 -------------- 00:14:14.616 Get Log Page (02h): Supported 00:14:14.616 Identify (06h): Supported 00:14:14.616 Abort (08h): Supported 00:14:14.616 Set Features (09h): Supported 00:14:14.616 Get Features (0Ah): Supported 00:14:14.616 Asynchronous Event Request (0Ch): Supported 00:14:14.616 Keep Alive (18h): Supported 00:14:14.616 I/O Commands 00:14:14.616 ------------ 00:14:14.616 Flush (00h): Supported LBA-Change 00:14:14.616 Write (01h): Supported LBA-Change 00:14:14.616 Read (02h): Supported 00:14:14.616 Compare (05h): Supported 00:14:14.616 Write Zeroes (08h): Supported LBA-Change 00:14:14.616 Dataset Management (09h): Supported LBA-Change 00:14:14.616 Copy (19h): Supported LBA-Change 00:14:14.616 00:14:14.616 Error Log 00:14:14.616 ========= 00:14:14.616 00:14:14.616 Arbitration 00:14:14.616 =========== 00:14:14.616 Arbitration Burst: 1 00:14:14.616 00:14:14.616 Power Management 00:14:14.616 ================ 00:14:14.616 Number of Power States: 1 00:14:14.616 Current Power State: Power State #0 00:14:14.616 Power State #0: 00:14:14.616 Max Power: 0.00 W 00:14:14.616 Non-Operational State: Operational 00:14:14.616 Entry Latency: Not Reported 00:14:14.616 Exit Latency: Not Reported 00:14:14.616 Relative Read Throughput: 0 00:14:14.616 Relative Read Latency: 0 00:14:14.616 Relative Write Throughput: 0 00:14:14.616 Relative Write Latency: 0 00:14:14.616 Idle
Power: Not Reported 00:14:14.616 Active Power: Not Reported 00:14:14.616 Non-Operational Permissive Mode: Not Supported 00:14:14.616 00:14:14.616 Health Information 00:14:14.616 ================== 00:14:14.616 Critical Warnings: 00:14:14.616 Available Spare Space: OK 00:14:14.616 Temperature: OK 00:14:14.616 Device Reliability: OK 00:14:14.616 Read Only: No 00:14:14.616 Volatile Memory Backup: OK 00:14:14.616 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:14.616 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:14.616 Available Spare: 0% 00:14:14.616 [2024-07-13 07:01:44.056906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:14.616 [2024-07-13 07:01:44.056923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:14.616 [2024-07-13 07:01:44.056967] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:14.616 [2024-07-13 07:01:44.056985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.616 [2024-07-13 07:01:44.056996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.616 [2024-07-13 07:01:44.057005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.616 [2024-07-13 07:01:44.057015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.616 [2024-07-13 07:01:44.057524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:14.616 [2024-07-13 07:01:44.057544] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:14.616 [2024-07-13 07:01:44.058521] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:14.616 [2024-07-13 07:01:44.058606] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:14.616 [2024-07-13 07:01:44.058620] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:14.616 [2024-07-13 07:01:44.059531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:14.616 [2024-07-13 07:01:44.059552] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:14.616 [2024-07-13 07:01:44.059603] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:14.616 [2024-07-13 07:01:44.063880] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:14.874 Available Spare Threshold: 0% 00:14:14.874 Life Percentage Used: 0% 00:14:14.874 Data Units Read: 0 00:14:14.874 Data Units Written: 0 00:14:14.874 Host Read Commands: 0 00:14:14.874 Host Write Commands: 0 00:14:14.874 Controller Busy Time: 0 minutes 00:14:14.874 Power Cycles: 0 00:14:14.874 Power On Hours: 0 hours
00:14:14.874 Unsafe Shutdowns: 0 00:14:14.874 Unrecoverable Media Errors: 0 00:14:14.874 Lifetime Error Log Entries: 0 00:14:14.874 Warning Temperature Time: 0 minutes 00:14:14.874 Critical Temperature Time: 0 minutes 00:14:14.874 00:14:14.874 Number of Queues 00:14:14.874 ================ 00:14:14.874 Number of I/O Submission Queues: 127 00:14:14.874 Number of I/O Completion Queues: 127 00:14:14.874 00:14:14.874 Active Namespaces 00:14:14.874 ================= 00:14:14.874 Namespace ID:1 00:14:14.874 Error Recovery Timeout: Unlimited 00:14:14.874 Command Set Identifier: NVM (00h) 00:14:14.874 Deallocate: Supported 00:14:14.874 Deallocated/Unwritten Error: Not Supported 00:14:14.874 Deallocated Read Value: Unknown 00:14:14.874 Deallocate in Write Zeroes: Not Supported 00:14:14.874 Deallocated Guard Field: 0xFFFF 00:14:14.874 Flush: Supported 00:14:14.874 Reservation: Supported 00:14:14.874 Namespace Sharing Capabilities: Multiple Controllers 00:14:14.874 Size (in LBAs): 131072 (0GiB) 00:14:14.874 Capacity (in LBAs): 131072 (0GiB) 00:14:14.874 Utilization (in LBAs): 131072 (0GiB) 00:14:14.874 NGUID: F85661A8155F426B9D4B108DCC119F6F 00:14:14.874 UUID: f85661a8-155f-426b-9d4b-108dcc119f6f 00:14:14.874 Thin Provisioning: Not Supported 00:14:14.874 Per-NS Atomic Units: Yes 00:14:14.874 Atomic Boundary Size (Normal): 0 00:14:14.874 Atomic Boundary Size (PFail): 0 00:14:14.874 Atomic Boundary Offset: 0 00:14:14.874 Maximum Single Source Range Length: 65535 00:14:14.874 Maximum Copy Length: 65535 00:14:14.874 Maximum Source Range Count: 1 00:14:14.874 NGUID/EUI64 Never Reused: No 00:14:14.874 Namespace Write Protected: No 00:14:14.874 Number of LBA Formats: 1 00:14:14.874 Current LBA Format: LBA Format #00 00:14:14.874 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:14.874 00:14:14.874 07:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:14.874 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.874 [2024-07-13 07:01:44.285700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:20.130 Initializing NVMe Controllers 00:14:20.130 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:20.130 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:20.130 Initialization complete. Launching workers. 
00:14:20.130 ======================================================== 00:14:20.130 Latency(us) 00:14:20.130 Device Information : IOPS MiB/s Average min max 00:14:20.130 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34681.06 135.47 3690.09 1177.47 10697.89 00:14:20.130 ======================================================== 00:14:20.130 Total : 34681.06 135.47 3690.09 1177.47 10697.89 00:14:20.130 00:14:20.130 [2024-07-13 07:01:49.311358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:20.130 07:01:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:20.130 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.130 [2024-07-13 07:01:49.541498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:25.387 Initializing NVMe Controllers 00:14:25.387 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:25.387 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:25.387 Initialization complete. Launching workers. 00:14:25.387 ======================================================== 00:14:25.387 Latency(us) 00:14:25.387 Device Information : IOPS MiB/s Average min max 00:14:25.387 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15963.16 62.36 8023.74 6011.48 15981.25 00:14:25.387 ======================================================== 00:14:25.387 Total : 15963.16 62.36 8023.74 6011.48 15981.25 00:14:25.387 00:14:25.387 [2024-07-13 07:01:54.583954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.387 07:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:25.387 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.387 [2024-07-13 07:01:54.798047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.650 [2024-07-13 07:01:59.880297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.650 Initializing NVMe Controllers 00:14:30.650 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:30.650 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:30.650 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:30.650 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:30.650 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:30.650 Initialization complete. Launching workers. 
00:14:30.650 Starting thread on core 2 00:14:30.650 Starting thread on core 3 00:14:30.650 Starting thread on core 1 00:14:30.650 07:01:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:30.650 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.908 [2024-07-13 07:02:00.190382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.221 [2024-07-13 07:02:03.455116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.221 Initializing NVMe Controllers 00:14:34.221 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.221 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.221 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:34.221 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:34.221 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:34.221 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:34.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:34.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:34.221 Initialization complete. Launching workers. 00:14:34.221 Starting thread on core 1 with urgent priority queue 00:14:34.221 Starting thread on core 2 with urgent priority queue 00:14:34.221 Starting thread on core 3 with urgent priority queue 00:14:34.221 Starting thread on core 0 with urgent priority queue 00:14:34.221 SPDK bdev Controller (SPDK1 ) core 0: 4734.33 IO/s 21.12 secs/100000 ios 00:14:34.221 SPDK bdev Controller (SPDK1 ) core 1: 5214.33 IO/s 19.18 secs/100000 ios 00:14:34.221 SPDK bdev Controller (SPDK1 ) core 2: 5424.00 IO/s 18.44 secs/100000 ios 00:14:34.221 SPDK bdev Controller (SPDK1 ) core 3: 5180.33 IO/s 19.30 secs/100000 ios 00:14:34.221 ======================================================== 00:14:34.221 00:14:34.221 07:02:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:34.221 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.477 [2024-07-13 07:02:03.747398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.477 Initializing NVMe Controllers 00:14:34.477 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.477 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.477 Namespace ID: 1 size: 0GB 00:14:34.477 Initialization complete. 00:14:34.477 INFO: using host memory buffer for IO 00:14:34.477 Hello world! 
00:14:34.477 [2024-07-13 07:02:03.781956] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.477 07:02:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:34.477 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.734 [2024-07-13 07:02:04.078401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:35.665 Initializing NVMe Controllers 00:14:35.665 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.665 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.665 Initialization complete. Launching workers. 00:14:35.665 submit (in ns) avg, min, max = 7795.0, 3511.1, 4015230.0 00:14:35.665 complete (in ns) avg, min, max = 24800.3, 2083.3, 4018983.3 00:14:35.665 00:14:35.665 Submit histogram 00:14:35.665 ================ 00:14:35.665 Range in us Cumulative Count 00:14:35.665 3.508 - 3.532: 0.3564% ( 48) 00:14:35.665 3.532 - 3.556: 1.3366% ( 132) 00:14:35.665 3.556 - 3.579: 4.1657% ( 381) 00:14:35.665 3.579 - 3.603: 9.4527% ( 712) 00:14:35.665 3.603 - 3.627: 18.3931% ( 1204) 00:14:35.665 3.627 - 3.650: 27.8087% ( 1268) 00:14:35.665 3.650 - 3.674: 36.1996% ( 1130) 00:14:35.665 3.674 - 3.698: 42.7861% ( 887) 00:14:35.665 3.698 - 3.721: 48.7711% ( 806) 00:14:35.665 3.721 - 3.745: 53.8947% ( 690) 00:14:35.665 3.745 - 3.769: 58.1867% ( 578) 00:14:35.665 3.769 - 3.793: 62.2633% ( 549) 00:14:35.665 3.793 - 3.816: 65.7310% ( 467) 00:14:35.665 3.816 - 3.840: 69.5775% ( 518) 00:14:35.665 3.840 - 3.864: 73.6170% ( 544) 00:14:35.665 3.864 - 3.887: 77.6565% ( 544) 00:14:35.665 3.887 - 3.911: 81.3693% ( 500) 00:14:35.665 3.911 - 3.935: 84.4509% ( 415) 00:14:35.665 3.935 - 3.959: 86.4781% ( 273) 00:14:35.665 3.959 - 3.982: 88.2676% ( 241) 00:14:35.665 3.982 - 4.006: 89.9829% ( 231) 00:14:35.665 4.006 - 4.030: 91.1265% ( 154) 00:14:35.665 4.030 - 4.053: 92.1289% ( 135) 00:14:35.665 4.053 - 4.077: 93.1611% ( 139) 00:14:35.665 4.077 - 4.101: 94.1561% ( 134) 00:14:35.665 4.101 - 4.124: 94.7724% ( 83) 00:14:35.665 4.124 - 4.148: 95.2922% ( 70) 00:14:35.665 4.148 - 4.172: 95.8194% ( 71) 00:14:35.665 4.172 - 4.196: 96.1239% ( 41) 00:14:35.665 4.196 - 4.219: 96.3466% ( 30) 00:14:35.665 4.219 - 4.243: 96.4135% ( 9) 00:14:35.665 4.243 - 4.267: 96.5768% ( 22) 00:14:35.665 4.267 - 4.290: 96.6808% ( 14) 00:14:35.665 4.290 - 4.314: 96.7699% ( 12) 00:14:35.665 4.314 - 4.338: 96.8367% ( 9) 00:14:35.665 4.338 - 4.361: 96.9258% ( 12) 00:14:35.665 4.361 - 4.385: 96.9704% ( 6) 00:14:35.665 4.385 - 4.409: 97.0149% ( 6) 00:14:35.665 4.409 - 4.433: 97.0669% ( 7) 00:14:35.665 4.433 - 4.456: 97.0892% ( 3) 00:14:35.665 4.456 - 4.480: 97.1337% ( 6) 00:14:35.665 4.480 - 4.504: 97.1709% ( 5) 00:14:35.665 4.504 - 4.527: 97.1931% ( 3) 00:14:35.665 4.527 - 4.551: 97.2006% ( 1) 00:14:35.665 4.551 - 4.575: 97.2303% ( 4) 00:14:35.665 4.599 - 4.622: 97.2525% ( 3) 00:14:35.665 4.622 - 4.646: 97.2600% ( 1) 00:14:35.665 4.646 - 4.670: 97.2897% ( 4) 00:14:35.665 4.670 - 4.693: 97.3714% ( 11) 00:14:35.665 4.693 - 4.717: 97.4382% ( 9) 00:14:35.665 4.717 - 4.741: 97.4827% ( 6) 00:14:35.665 4.741 - 4.764: 97.5273% ( 6) 00:14:35.665 4.764 - 4.788: 97.5718% ( 6) 00:14:35.665 4.788 - 4.812: 97.6312% ( 8) 00:14:35.665 4.812 - 4.836: 97.6758% ( 6) 00:14:35.665 4.836 - 4.859: 97.7055% ( 4) 00:14:35.665 4.859 
- 4.883: 97.7575% ( 7) 00:14:35.665 4.883 - 4.907: 97.7872% ( 4) 00:14:35.665 4.907 - 4.930: 97.8169% ( 4) 00:14:35.665 4.930 - 4.954: 97.8540% ( 5) 00:14:35.665 4.954 - 4.978: 97.9060% ( 7) 00:14:35.665 4.978 - 5.001: 97.9357% ( 4) 00:14:35.665 5.001 - 5.025: 97.9728% ( 5) 00:14:35.665 5.025 - 5.049: 97.9802% ( 1) 00:14:35.665 5.049 - 5.073: 98.0174% ( 5) 00:14:35.666 5.073 - 5.096: 98.0397% ( 3) 00:14:35.666 5.096 - 5.120: 98.0916% ( 7) 00:14:35.666 5.144 - 5.167: 98.1139% ( 3) 00:14:35.666 5.167 - 5.191: 98.1362% ( 3) 00:14:35.666 5.191 - 5.215: 98.1733% ( 5) 00:14:35.666 5.215 - 5.239: 98.1882% ( 2) 00:14:35.666 5.239 - 5.262: 98.2104% ( 3) 00:14:35.666 5.262 - 5.286: 98.2253% ( 2) 00:14:35.666 5.286 - 5.310: 98.2476% ( 3) 00:14:35.666 5.333 - 5.357: 98.2698% ( 3) 00:14:35.666 5.357 - 5.381: 98.2921% ( 3) 00:14:35.666 5.381 - 5.404: 98.2995% ( 1) 00:14:35.666 5.404 - 5.428: 98.3070% ( 1) 00:14:35.666 5.452 - 5.476: 98.3292% ( 3) 00:14:35.666 5.476 - 5.499: 98.3367% ( 1) 00:14:35.666 5.499 - 5.523: 98.3515% ( 2) 00:14:35.666 5.570 - 5.594: 98.3590% ( 1) 00:14:35.666 5.594 - 5.618: 98.3664% ( 1) 00:14:35.666 5.618 - 5.641: 98.3738% ( 1) 00:14:35.666 5.641 - 5.665: 98.3887% ( 2) 00:14:35.666 5.665 - 5.689: 98.4035% ( 2) 00:14:35.666 5.689 - 5.713: 98.4109% ( 1) 00:14:35.666 5.713 - 5.736: 98.4332% ( 3) 00:14:35.666 5.736 - 5.760: 98.4406% ( 1) 00:14:35.666 5.760 - 5.784: 98.4481% ( 1) 00:14:35.666 5.879 - 5.902: 98.4555% ( 1) 00:14:35.666 6.021 - 6.044: 98.4629% ( 1) 00:14:35.666 6.116 - 6.163: 98.4703% ( 1) 00:14:35.666 6.163 - 6.210: 98.4852% ( 2) 00:14:35.666 6.495 - 6.542: 98.4926% ( 1) 00:14:35.666 6.637 - 6.684: 98.5000% ( 1) 00:14:35.666 6.684 - 6.732: 98.5075% ( 1) 00:14:35.666 6.732 - 6.779: 98.5149% ( 1) 00:14:35.666 6.827 - 6.874: 98.5223% ( 1) 00:14:35.666 7.159 - 7.206: 98.5446% ( 3) 00:14:35.666 7.206 - 7.253: 98.5594% ( 2) 00:14:35.666 7.253 - 7.301: 98.5669% ( 1) 00:14:35.666 7.348 - 7.396: 98.5817% ( 2) 00:14:35.666 7.396 - 7.443: 98.6040% ( 3) 00:14:35.666 7.490 - 7.538: 98.6114% ( 1) 00:14:35.666 7.538 - 7.585: 98.6337% ( 3) 00:14:35.666 7.585 - 7.633: 98.6634% ( 4) 00:14:35.666 7.633 - 7.680: 98.6931% ( 4) 00:14:35.666 7.680 - 7.727: 98.7228% ( 4) 00:14:35.666 7.775 - 7.822: 98.7377% ( 2) 00:14:35.666 7.822 - 7.870: 98.7451% ( 1) 00:14:35.666 7.917 - 7.964: 98.7525% ( 1) 00:14:35.666 7.964 - 8.012: 98.7674% ( 2) 00:14:35.666 8.154 - 8.201: 98.7748% ( 1) 00:14:35.666 8.296 - 8.344: 98.7896% ( 2) 00:14:35.666 8.344 - 8.391: 98.7971% ( 1) 00:14:35.666 8.391 - 8.439: 98.8045% ( 1) 00:14:35.666 8.439 - 8.486: 98.8193% ( 2) 00:14:35.666 8.533 - 8.581: 98.8268% ( 1) 00:14:35.666 8.676 - 8.723: 98.8342% ( 1) 00:14:35.666 8.818 - 8.865: 98.8416% ( 1) 00:14:35.666 8.913 - 8.960: 98.8490% ( 1) 00:14:35.666 9.055 - 9.102: 98.8713% ( 3) 00:14:35.666 9.150 - 9.197: 98.8787% ( 1) 00:14:35.666 9.197 - 9.244: 98.8862% ( 1) 00:14:35.666 9.671 - 9.719: 98.8936% ( 1) 00:14:35.666 9.861 - 9.908: 98.9010% ( 1) 00:14:35.666 10.098 - 10.145: 98.9084% ( 1) 00:14:35.666 10.335 - 10.382: 98.9159% ( 1) 00:14:35.666 10.382 - 10.430: 98.9233% ( 1) 00:14:35.666 10.477 - 10.524: 98.9307% ( 1) 00:14:35.666 11.283 - 11.330: 98.9381% ( 1) 00:14:35.666 11.473 - 11.520: 98.9456% ( 1) 00:14:35.666 11.804 - 11.852: 98.9530% ( 1) 00:14:35.666 11.852 - 11.899: 98.9604% ( 1) 00:14:35.666 12.136 - 12.231: 98.9678% ( 1) 00:14:35.666 12.610 - 12.705: 98.9753% ( 1) 00:14:35.666 12.800 - 12.895: 98.9827% ( 1) 00:14:35.666 12.990 - 13.084: 98.9901% ( 1) 00:14:35.666 13.464 - 13.559: 98.9975% ( 1) 00:14:35.666 13.653 
- 13.748: 99.0124% ( 2) 00:14:35.666 14.791 - 14.886: 99.0198% ( 1) 00:14:35.666 14.886 - 14.981: 99.0347% ( 2) 00:14:35.666 15.265 - 15.360: 99.0421% ( 1) 00:14:35.666 15.455 - 15.550: 99.0495% ( 1) 00:14:35.666 15.929 - 16.024: 99.0570% ( 1) 00:14:35.666 16.687 - 16.782: 99.0644% ( 1) 00:14:35.666 17.256 - 17.351: 99.0792% ( 2) 00:14:35.666 17.351 - 17.446: 99.0867% ( 1) 00:14:35.666 17.541 - 17.636: 99.1164% ( 4) 00:14:35.666 17.636 - 17.730: 99.1832% ( 9) 00:14:35.666 17.730 - 17.825: 99.2277% ( 6) 00:14:35.666 17.825 - 17.920: 99.2946% ( 9) 00:14:35.666 17.920 - 18.015: 99.3391% ( 6) 00:14:35.666 18.015 - 18.110: 99.4060% ( 9) 00:14:35.666 18.110 - 18.204: 99.4654% ( 8) 00:14:35.666 18.204 - 18.299: 99.5322% ( 9) 00:14:35.666 18.299 - 18.394: 99.5990% ( 9) 00:14:35.666 18.394 - 18.489: 99.6807% ( 11) 00:14:35.666 18.489 - 18.584: 99.7178% ( 5) 00:14:35.666 18.584 - 18.679: 99.7327% ( 2) 00:14:35.666 18.679 - 18.773: 99.7401% ( 1) 00:14:35.666 18.773 - 18.868: 99.7624% ( 3) 00:14:35.666 18.868 - 18.963: 99.7921% ( 4) 00:14:35.666 18.963 - 19.058: 99.7995% ( 1) 00:14:35.666 19.058 - 19.153: 99.8069% ( 1) 00:14:35.666 19.247 - 19.342: 99.8144% ( 1) 00:14:35.666 19.342 - 19.437: 99.8218% ( 1) 00:14:35.666 19.532 - 19.627: 99.8292% ( 1) 00:14:35.666 19.627 - 19.721: 99.8441% ( 2) 00:14:35.666 19.721 - 19.816: 99.8589% ( 2) 00:14:35.666 19.816 - 19.911: 99.8663% ( 1) 00:14:35.666 20.196 - 20.290: 99.8812% ( 2) 00:14:35.666 21.049 - 21.144: 99.8886% ( 1) 00:14:35.666 23.324 - 23.419: 99.9035% ( 2) 00:14:35.666 3980.705 - 4004.978: 99.9703% ( 9) 00:14:35.666 4004.978 - 4029.250: 100.0000% ( 4) 00:14:35.666 00:14:35.666 Complete histogram 00:14:35.666 ================== 00:14:35.666 Range in us Cumulative Count 00:14:35.666 2.074 - 2.086: 0.0965% ( 13) 00:14:35.666 2.086 - 2.098: 18.6753% ( 2502) 00:14:35.666 2.098 - 2.110: 42.5410% ( 3214) 00:14:35.666 2.110 - 2.121: 45.3033% ( 372) 00:14:35.666 2.121 - 2.133: 54.5333% ( 1243) 00:14:35.666 2.133 - 2.145: 58.8327% ( 579) 00:14:35.666 2.145 - 2.157: 60.5703% ( 234) 00:14:35.666 2.157 - 2.169: 70.3943% ( 1323) 00:14:35.666 2.169 - 2.181: 75.5476% ( 694) 00:14:35.666 2.181 - 2.193: 76.7357% ( 160) 00:14:35.666 2.193 - 2.204: 80.0401% ( 445) 00:14:35.666 2.204 - 2.216: 81.6217% ( 213) 00:14:35.666 2.216 - 2.228: 82.4014% ( 105) 00:14:35.666 2.228 - 2.240: 85.6909% ( 443) 00:14:35.666 2.240 - 2.252: 90.1240% ( 597) 00:14:35.666 2.252 - 2.264: 91.5720% ( 195) 00:14:35.666 2.264 - 2.276: 92.8863% ( 177) 00:14:35.666 2.276 - 2.287: 93.7551% ( 117) 00:14:35.666 2.287 - 2.299: 94.0373% ( 38) 00:14:35.666 2.299 - 2.311: 94.3566% ( 43) 00:14:35.666 2.311 - 2.323: 94.9952% ( 86) 00:14:35.666 2.323 - 2.335: 95.4407% ( 60) 00:14:35.666 2.335 - 2.347: 95.5521% ( 15) 00:14:35.666 2.347 - 2.359: 95.5966% ( 6) 00:14:35.666 2.359 - 2.370: 95.6783% ( 11) 00:14:35.666 2.370 - 2.382: 95.8120% ( 18) 00:14:35.666 2.382 - 2.394: 96.1907% ( 51) 00:14:35.666 2.394 - 2.406: 96.5917% ( 54) 00:14:35.666 2.406 - 2.418: 96.8516% ( 35) 00:14:35.666 2.418 - 2.430: 97.0224% ( 23) 00:14:35.666 2.430 - 2.441: 97.1263% ( 14) 00:14:35.666 2.441 - 2.453: 97.2154% ( 12) 00:14:35.666 2.453 - 2.465: 97.3639% ( 20) 00:14:35.666 2.465 - 2.477: 97.5421% ( 24) 00:14:35.666 2.477 - 2.489: 97.6461% ( 14) 00:14:35.666 2.489 - 2.501: 97.7798% ( 18) 00:14:35.666 2.501 - 2.513: 97.8614% ( 11) 00:14:35.666 2.513 - 2.524: 97.9208% ( 8) 00:14:35.666 2.524 - 2.536: 97.9654% ( 6) 00:14:35.666 2.536 - 2.548: 97.9877% ( 3) 00:14:35.666 2.548 - 2.560: 98.0025% ( 2) 00:14:35.666 2.560 - 2.572: 98.0100% ( 
1) 00:14:35.666 2.572 - 2.584: 98.0322% ( 3) 00:14:35.666 2.584 - 2.596: 98.0397% ( 1) 00:14:35.666 2.596 - 2.607: 98.0545% ( 2) 00:14:35.666 2.607 - 2.619: 98.0694% ( 2) 00:14:35.666 2.631 - 2.643: 98.0842% ( 2) 00:14:35.666 2.643 - 2.655: 98.0916% ( 1) 00:14:35.666 2.655 - 2.667: 98.1065% ( 2) 00:14:35.666 2.667 - 2.679: 98.1139% ( 1) 00:14:35.666 2.679 - 2.690: 98.1362% ( 3) 00:14:35.666 2.690 - 2.702: 98.1436% ( 1) 00:14:35.666 2.702 - 2.714: 98.1659% ( 3) 00:14:35.666 2.714 - 2.726: 98.1807% ( 2) 00:14:35.666 2.726 - 2.738: 98.1882% ( 1) 00:14:35.666 2.738 - 2.750: 98.2104% ( 3) 00:14:35.666 2.750 - 2.761: 98.2253% ( 2) 00:14:35.666 2.773 - 2.785: 98.2327% ( 1) 00:14:35.666 2.785 - 2.797: 98.2401% ( 1) 00:14:35.666 2.833 - 2.844: 98.2476% ( 1) 00:14:35.666 2.856 - 2.868: 98.2550% ( 1) 00:14:35.666 2.892 - 2.904: 98.2624% ( 1) 00:14:35.666 2.927 - 2.939: 98.2698% ( 1) 00:14:35.666 2.939 - 2.951: 98.2773% ( 1) 00:14:35.666 2.951 - 2.963: 98.2921% ( 2) 00:14:35.666 2.975 - 2.987: 98.3070% ( 2) 00:14:35.666 2.999 - 3.010: 98.3144% ( 1) 00:14:35.666 3.022 - 3.034: 98.3218% ( 1) 00:14:35.667 3.034 - 3.058: 98.3367% ( 2) 00:14:35.667 3.058 - 3.081: 98.3590% ( 3) 00:14:35.667 3.081 - 3.105: 98.3664% ( 1) 00:14:35.667 3.105 - 3.129: 98.3738% ( 1) 00:14:35.667 3.153 - 3.176: 98.3812% ( 1) 00:14:35.667 3.176 - 3.200: 98.3961% ( 2) 00:14:35.667 [2024-07-13 07:02:05.100626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.924 3.200 - 3.224: 98.4035% ( 1) 00:14:35.924 3.247 - 3.271: 98.4109% ( 1) 00:14:35.924 3.295 - 3.319: 98.4184% ( 1) 00:14:35.924 3.319 - 3.342: 98.4406% ( 3) 00:14:35.924 3.342 - 3.366: 98.4703% ( 4) 00:14:35.924 3.366 - 3.390: 98.4926% ( 3) 00:14:35.924 3.390 - 3.413: 98.5000% ( 1) 00:14:35.924 3.413 - 3.437: 98.5223% ( 3) 00:14:35.924 3.437 - 3.461: 98.5520% ( 4) 00:14:35.924 3.461 - 3.484: 98.5594% ( 1) 00:14:35.924 3.484 - 3.508: 98.5817% ( 3) 00:14:35.924 3.532 - 3.556: 98.6188% ( 5) 00:14:35.924 3.556 - 3.579: 98.6263% ( 1) 00:14:35.924 3.579 - 3.603: 98.6337% ( 1) 00:14:35.924 3.603 - 3.627: 98.6634% ( 4) 00:14:35.924 3.674 - 3.698: 98.6857% ( 3) 00:14:35.924 3.698 - 3.721: 98.7005% ( 2) 00:14:35.924 3.721 - 3.745: 98.7154% ( 2) 00:14:35.924 3.745 - 3.769: 98.7228% ( 1) 00:14:35.924 3.769 - 3.793: 98.7377% ( 2) 00:14:35.924 3.816 - 3.840: 98.7525% ( 2) 00:14:35.924 3.911 - 3.935: 98.7599% ( 1) 00:14:35.924 3.935 - 3.959: 98.7674% ( 1) 00:14:35.924 3.982 - 4.006: 98.7748% ( 1) 00:14:35.924 4.006 - 4.030: 98.7822% ( 1) 00:14:35.924 4.053 - 4.077: 98.7896% ( 1) 00:14:35.924 4.124 - 4.148: 98.7971% ( 1) 00:14:35.924 4.764 - 4.788: 98.8045% ( 1) 00:14:35.924 4.907 - 4.930: 98.8119% ( 1) 00:14:35.924 5.570 - 5.594: 98.8268% ( 2) 00:14:35.924 5.618 - 5.641: 98.8342% ( 1) 00:14:35.924 5.713 - 5.736: 98.8416% ( 1) 00:14:35.924 5.831 - 5.855: 98.8490% ( 1) 00:14:35.924 5.879 - 5.902: 98.8639% ( 2) 00:14:35.924 5.997 - 6.021: 98.8713% ( 1) 00:14:35.924 6.068 - 6.116: 98.8862% ( 2) 00:14:35.924 6.163 - 6.210: 98.8936% ( 1) 00:14:35.924 6.210 - 6.258: 98.9010% ( 1) 00:14:35.924 6.258 - 6.305: 98.9084% ( 1) 00:14:35.924 6.305 - 6.353: 98.9159% ( 1) 00:14:35.924 6.353 - 6.400: 98.9307% ( 2) 00:14:35.924 6.400 - 6.447: 98.9381% ( 1) 00:14:35.924 8.296 - 8.344: 98.9456% ( 1) 00:14:35.924 9.813 - 9.861: 98.9530% ( 1) 00:14:35.924 10.335 - 10.382: 98.9604% ( 1) 00:14:35.924 11.141 - 11.188: 98.9678% ( 1) 00:14:35.924 15.739 - 15.834: 98.9753% ( 1) 00:14:35.924 15.834 - 15.929: 99.0050% ( 4) 00:14:35.924 15.929 - 16.024: 99.0198%
( 2) 00:14:35.924 16.024 - 16.119: 99.0347% ( 2) 00:14:35.924 16.119 - 16.213: 99.0792% ( 6) 00:14:35.924 16.213 - 16.308: 99.1015% ( 3) 00:14:35.924 16.308 - 16.403: 99.1461% ( 6) 00:14:35.924 16.403 - 16.498: 99.1683% ( 3) 00:14:35.924 16.498 - 16.593: 99.2055% ( 5) 00:14:35.924 16.593 - 16.687: 99.2500% ( 6) 00:14:35.924 16.687 - 16.782: 99.2797% ( 4) 00:14:35.924 16.782 - 16.877: 99.3243% ( 6) 00:14:35.924 16.877 - 16.972: 99.3614% ( 5) 00:14:35.924 16.972 - 17.067: 99.3763% ( 2) 00:14:35.924 17.067 - 17.161: 99.3837% ( 1) 00:14:35.924 17.351 - 17.446: 99.3911% ( 1) 00:14:35.924 17.446 - 17.541: 99.3985% ( 1) 00:14:35.924 17.825 - 17.920: 99.4060% ( 1) 00:14:35.924 18.204 - 18.299: 99.4134% ( 1) 00:14:35.924 18.299 - 18.394: 99.4208% ( 1) 00:14:35.924 18.868 - 18.963: 99.4282% ( 1) 00:14:35.924 19.153 - 19.247: 99.4357% ( 1) 00:14:35.924 3980.705 - 4004.978: 99.9554% ( 70) 00:14:35.924 4004.978 - 4029.250: 100.0000% ( 6) 00:14:35.924 00:14:35.924 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:35.924 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:35.924 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:35.924 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:35.924 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:36.182 [ 00:14:36.182 { 00:14:36.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:36.182 "subtype": "Discovery", 00:14:36.182 "listen_addresses": [], 00:14:36.182 "allow_any_host": true, 00:14:36.182 "hosts": [] 00:14:36.182 }, 00:14:36.182 { 00:14:36.182 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:36.182 "subtype": "NVMe", 00:14:36.182 "listen_addresses": [ 00:14:36.182 { 00:14:36.182 "trtype": "VFIOUSER", 00:14:36.182 "adrfam": "IPv4", 00:14:36.182 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:36.182 "trsvcid": "0" 00:14:36.182 } 00:14:36.182 ], 00:14:36.182 "allow_any_host": true, 00:14:36.182 "hosts": [], 00:14:36.182 "serial_number": "SPDK1", 00:14:36.182 "model_number": "SPDK bdev Controller", 00:14:36.182 "max_namespaces": 32, 00:14:36.182 "min_cntlid": 1, 00:14:36.182 "max_cntlid": 65519, 00:14:36.182 "namespaces": [ 00:14:36.182 { 00:14:36.182 "nsid": 1, 00:14:36.182 "bdev_name": "Malloc1", 00:14:36.182 "name": "Malloc1", 00:14:36.182 "nguid": "F85661A8155F426B9D4B108DCC119F6F", 00:14:36.182 "uuid": "f85661a8-155f-426b-9d4b-108dcc119f6f" 00:14:36.182 } 00:14:36.182 ] 00:14:36.182 }, 00:14:36.182 { 00:14:36.182 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:36.182 "subtype": "NVMe", 00:14:36.182 "listen_addresses": [ 00:14:36.182 { 00:14:36.182 "trtype": "VFIOUSER", 00:14:36.182 "adrfam": "IPv4", 00:14:36.182 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:36.182 "trsvcid": "0" 00:14:36.182 } 00:14:36.182 ], 00:14:36.182 "allow_any_host": true, 00:14:36.182 "hosts": [], 00:14:36.182 "serial_number": "SPDK2", 00:14:36.182 "model_number": "SPDK bdev Controller", 00:14:36.182 "max_namespaces": 32, 00:14:36.182 "min_cntlid": 1, 00:14:36.182 "max_cntlid": 65519, 00:14:36.182 "namespaces": [ 00:14:36.182 { 00:14:36.182 "nsid": 1, 00:14:36.182 "bdev_name": "Malloc2", 00:14:36.182 "name": "Malloc2", 00:14:36.182 "nguid": 
"80BD3AF6611645BFAE5F214C3FC8CF43", 00:14:36.182 "uuid": "80bd3af6-6116-45bf-ae5f-214c3fc8cf43" 00:14:36.182 } 00:14:36.182 ] 00:14:36.182 } 00:14:36.182 ] 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1477769 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:36.182 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:36.182 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.182 [2024-07-13 07:02:05.593367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.439 Malloc3 00:14:36.439 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:36.695 [2024-07-13 07:02:05.963079] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:36.695 07:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:36.695 Asynchronous Event Request test 00:14:36.695 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.695 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.695 Registering asynchronous event callbacks... 00:14:36.695 Starting namespace attribute notice tests for all controllers... 00:14:36.695 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:36.695 aer_cb - Changed Namespace 00:14:36.695 Cleaning up... 
00:14:36.954 [ 00:14:36.954 { 00:14:36.954 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:36.954 "subtype": "Discovery", 00:14:36.954 "listen_addresses": [], 00:14:36.954 "allow_any_host": true, 00:14:36.954 "hosts": [] 00:14:36.954 }, 00:14:36.954 { 00:14:36.954 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:36.954 "subtype": "NVMe", 00:14:36.954 "listen_addresses": [ 00:14:36.954 { 00:14:36.954 "trtype": "VFIOUSER", 00:14:36.954 "adrfam": "IPv4", 00:14:36.954 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:36.954 "trsvcid": "0" 00:14:36.954 } 00:14:36.954 ], 00:14:36.954 "allow_any_host": true, 00:14:36.954 "hosts": [], 00:14:36.954 "serial_number": "SPDK1", 00:14:36.954 "model_number": "SPDK bdev Controller", 00:14:36.954 "max_namespaces": 32, 00:14:36.954 "min_cntlid": 1, 00:14:36.954 "max_cntlid": 65519, 00:14:36.954 "namespaces": [ 00:14:36.954 { 00:14:36.954 "nsid": 1, 00:14:36.954 "bdev_name": "Malloc1", 00:14:36.954 "name": "Malloc1", 00:14:36.954 "nguid": "F85661A8155F426B9D4B108DCC119F6F", 00:14:36.954 "uuid": "f85661a8-155f-426b-9d4b-108dcc119f6f" 00:14:36.954 }, 00:14:36.954 { 00:14:36.954 "nsid": 2, 00:14:36.954 "bdev_name": "Malloc3", 00:14:36.954 "name": "Malloc3", 00:14:36.954 "nguid": "B413815FC3E3466D8B31B5AA26BD1298", 00:14:36.954 "uuid": "b413815f-c3e3-466d-8b31-b5aa26bd1298" 00:14:36.954 } 00:14:36.954 ] 00:14:36.954 }, 00:14:36.954 { 00:14:36.954 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:36.954 "subtype": "NVMe", 00:14:36.954 "listen_addresses": [ 00:14:36.954 { 00:14:36.954 "trtype": "VFIOUSER", 00:14:36.954 "adrfam": "IPv4", 00:14:36.954 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:36.954 "trsvcid": "0" 00:14:36.954 } 00:14:36.954 ], 00:14:36.954 "allow_any_host": true, 00:14:36.954 "hosts": [], 00:14:36.954 "serial_number": "SPDK2", 00:14:36.954 "model_number": "SPDK bdev Controller", 00:14:36.954 "max_namespaces": 32, 00:14:36.954 "min_cntlid": 1, 00:14:36.954 "max_cntlid": 65519, 00:14:36.954 "namespaces": [ 00:14:36.954 { 00:14:36.954 "nsid": 1, 00:14:36.954 "bdev_name": "Malloc2", 00:14:36.954 "name": "Malloc2", 00:14:36.954 "nguid": "80BD3AF6611645BFAE5F214C3FC8CF43", 00:14:36.954 "uuid": "80bd3af6-6116-45bf-ae5f-214c3fc8cf43" 00:14:36.954 } 00:14:36.954 ] 00:14:36.954 } 00:14:36.954 ] 00:14:36.954 07:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1477769 00:14:36.954 07:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.954 07:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:36.954 07:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:36.954 07:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:36.954 [2024-07-13 07:02:06.231723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:14:36.954 [2024-07-13 07:02:06.231758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477905 ] 00:14:36.954 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.954 [2024-07-13 07:02:06.248383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:36.954 [2024-07-13 07:02:06.265956] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:36.954 [2024-07-13 07:02:06.272182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:36.954 [2024-07-13 07:02:06.272215] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f20b1ea0000 00:14:36.955 [2024-07-13 07:02:06.273152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.274183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.275185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.276194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.277198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.278193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.279211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.280211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.955 [2024-07-13 07:02:06.281218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:36.955 [2024-07-13 07:02:06.281239] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f20b0c62000 00:14:36.955 [2024-07-13 07:02:06.282350] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:36.955 [2024-07-13 07:02:06.296547] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:36.955 [2024-07-13 07:02:06.296581] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:36.955 [2024-07-13 07:02:06.301682] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:36.955 [2024-07-13 07:02:06.301731] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: 
max_completions_cap = 64 num_trackers = 192 00:14:36.955 [2024-07-13 07:02:06.301808] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:36.955 [2024-07-13 07:02:06.301829] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:36.955 [2024-07-13 07:02:06.301839] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:36.955 [2024-07-13 07:02:06.302683] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:36.955 [2024-07-13 07:02:06.302703] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:36.955 [2024-07-13 07:02:06.302715] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:36.955 [2024-07-13 07:02:06.303686] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:36.955 [2024-07-13 07:02:06.303705] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:36.955 [2024-07-13 07:02:06.303718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:36.955 [2024-07-13 07:02:06.304698] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:36.955 [2024-07-13 07:02:06.304717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:36.955 [2024-07-13 07:02:06.305708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:36.955 [2024-07-13 07:02:06.305727] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:36.955 [2024-07-13 07:02:06.305736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:36.955 [2024-07-13 07:02:06.305747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:36.955 [2024-07-13 07:02:06.305862] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:36.955 [2024-07-13 07:02:06.305878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:36.955 [2024-07-13 07:02:06.305886] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:36.955 [2024-07-13 07:02:06.306712] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:36.955 [2024-07-13 07:02:06.307714] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:36.955 [2024-07-13 07:02:06.308720] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:36.955 [2024-07-13 07:02:06.309720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:36.955 [2024-07-13 07:02:06.309796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:36.955 [2024-07-13 07:02:06.310735] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:36.955 [2024-07-13 07:02:06.310754] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:36.955 [2024-07-13 07:02:06.310762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.310785] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:36.955 [2024-07-13 07:02:06.310802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.310821] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:36.955 [2024-07-13 07:02:06.310830] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:36.955 [2024-07-13 07:02:06.310862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.955 [2024-07-13 07:02:06.318877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:36.955 [2024-07-13 07:02:06.318899] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:36.955 [2024-07-13 07:02:06.318912] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:36.955 [2024-07-13 07:02:06.318921] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:36.955 [2024-07-13 07:02:06.318929] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:36.955 [2024-07-13 07:02:06.318937] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:36.955 [2024-07-13 07:02:06.318945] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:36.955 [2024-07-13 07:02:06.318953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.318966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 
00:14:36.955 [2024-07-13 07:02:06.318981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:36.955 [2024-07-13 07:02:06.326878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:36.955 [2024-07-13 07:02:06.326906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.955 [2024-07-13 07:02:06.326921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.955 [2024-07-13 07:02:06.326933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.955 [2024-07-13 07:02:06.326945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.955 [2024-07-13 07:02:06.326954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.326969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.326983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:36.955 [2024-07-13 07:02:06.334878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:36.955 [2024-07-13 07:02:06.334896] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:36.955 [2024-07-13 07:02:06.334905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.334916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.334925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.334939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:36.955 [2024-07-13 07:02:06.342874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:36.955 [2024-07-13 07:02:06.342960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.342976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.342989] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:36.955 [2024-07-13 07:02:06.342998] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:36.955 [2024-07-13 07:02:06.343008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:36.955 [2024-07-13 07:02:06.350891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:36.955 [2024-07-13 07:02:06.350914] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:36.955 [2024-07-13 07:02:06.350929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.350943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:36.955 [2024-07-13 07:02:06.350955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:36.955 [2024-07-13 07:02:06.350966] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:36.955 [2024-07-13 07:02:06.350976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.955 [2024-07-13 07:02:06.358877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:36.955 [2024-07-13 07:02:06.358904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.358919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.358932] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:36.956 [2024-07-13 07:02:06.358940] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:36.956 [2024-07-13 07:02:06.358950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.366875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.366896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.366908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.366922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.366932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.366940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 
00:14:36.956 [2024-07-13 07:02:06.366948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.366956] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:36.956 [2024-07-13 07:02:06.366963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:36.956 [2024-07-13 07:02:06.366971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:36.956 [2024-07-13 07:02:06.366995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.374877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.374902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.382875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.382901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.390874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.390900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.398877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.398909] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:36.956 [2024-07-13 07:02:06.398920] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:36.956 [2024-07-13 07:02:06.398927] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:36.956 [2024-07-13 07:02:06.398933] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:36.956 [2024-07-13 07:02:06.398942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:36.956 [2024-07-13 07:02:06.398954] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:36.956 [2024-07-13 07:02:06.398962] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:36.956 [2024-07-13 07:02:06.398971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.398981] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:36.956 [2024-07-13 07:02:06.398989] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 
00:14:36.956 [2024-07-13 07:02:06.398998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.399009] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:36.956 [2024-07-13 07:02:06.399017] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:36.956 [2024-07-13 07:02:06.399026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:36.956 [2024-07-13 07:02:06.406882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.406912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.406931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:36.956 [2024-07-13 07:02:06.406943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:36.956 ===================================================== 00:14:36.956 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:36.956 ===================================================== 00:14:36.956 Controller Capabilities/Features 00:14:36.956 ================================ 00:14:36.956 Vendor ID: 4e58 00:14:36.956 Subsystem Vendor ID: 4e58 00:14:36.956 Serial Number: SPDK2 00:14:36.956 Model Number: SPDK bdev Controller 00:14:36.956 Firmware Version: 24.09 00:14:36.956 Recommended Arb Burst: 6 00:14:36.956 IEEE OUI Identifier: 8d 6b 50 00:14:36.956 Multi-path I/O 00:14:36.956 May have multiple subsystem ports: Yes 00:14:36.956 May have multiple controllers: Yes 00:14:36.956 Associated with SR-IOV VF: No 00:14:36.956 Max Data Transfer Size: 131072 00:14:36.956 Max Number of Namespaces: 32 00:14:36.956 Max Number of I/O Queues: 127 00:14:36.956 NVMe Specification Version (VS): 1.3 00:14:36.956 NVMe Specification Version (Identify): 1.3 00:14:36.956 Maximum Queue Entries: 256 00:14:36.956 Contiguous Queues Required: Yes 00:14:36.956 Arbitration Mechanisms Supported 00:14:36.956 Weighted Round Robin: Not Supported 00:14:36.956 Vendor Specific: Not Supported 00:14:36.956 Reset Timeout: 15000 ms 00:14:36.956 Doorbell Stride: 4 bytes 00:14:36.956 NVM Subsystem Reset: Not Supported 00:14:36.956 Command Sets Supported 00:14:36.956 NVM Command Set: Supported 00:14:36.956 Boot Partition: Not Supported 00:14:36.956 Memory Page Size Minimum: 4096 bytes 00:14:36.956 Memory Page Size Maximum: 4096 bytes 00:14:36.956 Persistent Memory Region: Not Supported 00:14:36.956 Optional Asynchronous Events Supported 00:14:36.956 Namespace Attribute Notices: Supported 00:14:36.956 Firmware Activation Notices: Not Supported 00:14:36.956 ANA Change Notices: Not Supported 00:14:36.956 PLE Aggregate Log Change Notices: Not Supported 00:14:36.956 LBA Status Info Alert Notices: Not Supported 00:14:36.956 EGE Aggregate Log Change Notices: Not Supported 00:14:36.956 Normal NVM Subsystem Shutdown event: Not Supported 00:14:36.956 Zone Descriptor Change Notices: Not Supported 00:14:36.956 Discovery Log Change Notices: Not Supported 00:14:36.956 Controller 
Attributes 00:14:36.956 128-bit Host Identifier: Supported 00:14:36.956 Non-Operational Permissive Mode: Not Supported 00:14:36.956 NVM Sets: Not Supported 00:14:36.956 Read Recovery Levels: Not Supported 00:14:36.956 Endurance Groups: Not Supported 00:14:36.956 Predictable Latency Mode: Not Supported 00:14:36.956 Traffic Based Keep ALive: Not Supported 00:14:36.956 Namespace Granularity: Not Supported 00:14:36.956 SQ Associations: Not Supported 00:14:36.956 UUID List: Not Supported 00:14:36.956 Multi-Domain Subsystem: Not Supported 00:14:36.956 Fixed Capacity Management: Not Supported 00:14:36.956 Variable Capacity Management: Not Supported 00:14:36.956 Delete Endurance Group: Not Supported 00:14:36.956 Delete NVM Set: Not Supported 00:14:36.956 Extended LBA Formats Supported: Not Supported 00:14:36.956 Flexible Data Placement Supported: Not Supported 00:14:36.956 00:14:36.956 Controller Memory Buffer Support 00:14:36.956 ================================ 00:14:36.956 Supported: No 00:14:36.956 00:14:36.956 Persistent Memory Region Support 00:14:36.956 ================================ 00:14:36.956 Supported: No 00:14:36.956 00:14:36.956 Admin Command Set Attributes 00:14:36.956 ============================ 00:14:36.956 Security Send/Receive: Not Supported 00:14:36.956 Format NVM: Not Supported 00:14:36.956 Firmware Activate/Download: Not Supported 00:14:36.956 Namespace Management: Not Supported 00:14:36.956 Device Self-Test: Not Supported 00:14:36.956 Directives: Not Supported 00:14:36.956 NVMe-MI: Not Supported 00:14:36.956 Virtualization Management: Not Supported 00:14:36.956 Doorbell Buffer Config: Not Supported 00:14:36.956 Get LBA Status Capability: Not Supported 00:14:36.956 Command & Feature Lockdown Capability: Not Supported 00:14:36.956 Abort Command Limit: 4 00:14:36.956 Async Event Request Limit: 4 00:14:36.956 Number of Firmware Slots: N/A 00:14:36.956 Firmware Slot 1 Read-Only: N/A 00:14:36.956 Firmware Activation Without Reset: N/A 00:14:36.956 Multiple Update Detection Support: N/A 00:14:36.956 Firmware Update Granularity: No Information Provided 00:14:36.956 Per-Namespace SMART Log: No 00:14:36.956 Asymmetric Namespace Access Log Page: Not Supported 00:14:36.956 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:36.956 Command Effects Log Page: Supported 00:14:36.956 Get Log Page Extended Data: Supported 00:14:36.956 Telemetry Log Pages: Not Supported 00:14:36.956 Persistent Event Log Pages: Not Supported 00:14:36.956 Supported Log Pages Log Page: May Support 00:14:36.956 Commands Supported & Effects Log Page: Not Supported 00:14:36.956 Feature Identifiers & Effects Log Page:May Support 00:14:36.956 NVMe-MI Commands & Effects Log Page: May Support 00:14:36.956 Data Area 4 for Telemetry Log: Not Supported 00:14:36.956 Error Log Page Entries Supported: 128 00:14:36.956 Keep Alive: Supported 00:14:36.956 Keep Alive Granularity: 10000 ms 00:14:36.956 00:14:36.956 NVM Command Set Attributes 00:14:36.956 ========================== 00:14:36.956 Submission Queue Entry Size 00:14:36.956 Max: 64 00:14:36.956 Min: 64 00:14:36.957 Completion Queue Entry Size 00:14:36.957 Max: 16 00:14:36.957 Min: 16 00:14:36.957 Number of Namespaces: 32 00:14:36.957 Compare Command: Supported 00:14:36.957 Write Uncorrectable Command: Not Supported 00:14:36.957 Dataset Management Command: Supported 00:14:36.957 Write Zeroes Command: Supported 00:14:36.957 Set Features Save Field: Not Supported 00:14:36.957 Reservations: Not Supported 00:14:36.957 Timestamp: Not Supported 00:14:36.957 Copy: Supported 
00:14:36.957 Volatile Write Cache: Present 00:14:36.957 Atomic Write Unit (Normal): 1 00:14:36.957 Atomic Write Unit (PFail): 1 00:14:36.957 Atomic Compare & Write Unit: 1 00:14:36.957 Fused Compare & Write: Supported 00:14:36.957 Scatter-Gather List 00:14:36.957 SGL Command Set: Supported (Dword aligned) 00:14:36.957 SGL Keyed: Not Supported 00:14:36.957 SGL Bit Bucket Descriptor: Not Supported 00:14:36.957 SGL Metadata Pointer: Not Supported 00:14:36.957 Oversized SGL: Not Supported 00:14:36.957 SGL Metadata Address: Not Supported 00:14:36.957 SGL Offset: Not Supported 00:14:36.957 Transport SGL Data Block: Not Supported 00:14:36.957 Replay Protected Memory Block: Not Supported 00:14:36.957 00:14:36.957 Firmware Slot Information 00:14:36.957 ========================= 00:14:36.957 Active slot: 1 00:14:36.957 Slot 1 Firmware Revision: 24.09 00:14:36.957 00:14:36.957 00:14:36.957 Commands Supported and Effects 00:14:36.957 ============================== 00:14:36.957 Admin Commands 00:14:36.957 -------------- 00:14:36.957 Get Log Page (02h): Supported 00:14:36.957 Identify (06h): Supported 00:14:36.957 Abort (08h): Supported 00:14:36.957 Set Features (09h): Supported 00:14:36.957 Get Features (0Ah): Supported 00:14:36.957 Asynchronous Event Request (0Ch): Supported 00:14:36.957 Keep Alive (18h): Supported 00:14:36.957 I/O Commands 00:14:36.957 ------------ 00:14:36.957 Flush (00h): Supported LBA-Change 00:14:36.957 Write (01h): Supported LBA-Change 00:14:36.957 Read (02h): Supported 00:14:36.957 Compare (05h): Supported 00:14:36.957 Write Zeroes (08h): Supported LBA-Change 00:14:36.957 Dataset Management (09h): Supported LBA-Change 00:14:36.957 Copy (19h): Supported LBA-Change 00:14:36.957 00:14:36.957 Error Log 00:14:36.957 ========= 00:14:36.957 00:14:36.957 Arbitration 00:14:36.957 =========== 00:14:36.957 Arbitration Burst: 1 00:14:36.957 00:14:36.957 Power Management 00:14:36.957 ================ 00:14:36.957 Number of Power States: 1 00:14:36.957 Current Power State: Power State #0 00:14:36.957 Power State #0: 00:14:36.957 Max Power: 0.00 W 00:14:36.957 Non-Operational State: Operational 00:14:36.957 Entry Latency: Not Reported 00:14:36.957 Exit Latency: Not Reported 00:14:36.957 Relative Read Throughput: 0 00:14:36.957 Relative Read Latency: 0 00:14:36.957 Relative Write Throughput: 0 00:14:36.957 Relative Write Latency: 0 00:14:36.957 Idle Power: Not Reported 00:14:36.957 Active Power: Not Reported 00:14:36.957 Non-Operational Permissive Mode: Not Supported 00:14:36.957 00:14:36.957 Health Information 00:14:36.957 ================== 00:14:36.957 Critical Warnings: 00:14:36.957 Available Spare Space: OK 00:14:36.957 Temperature: OK 00:14:36.957 Device Reliability: OK 00:14:36.957 Read Only: No 00:14:36.957 Volatile Memory Backup: OK 00:14:36.957 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:36.957 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:36.957 Available Spare: 0% 00:14:36.957 Available Spare Threshold: 0% [2024-07-13 07:02:06.407081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:37.214 [2024-07-13 07:02:06.414884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:37.214 [2024-07-13 07:02:06.414940] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:37.214 [2024-07-13 07:02:06.414958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.214 [2024-07-13 07:02:06.414969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.214 [2024-07-13 07:02:06.414978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.214 [2024-07-13 07:02:06.414988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.214 [2024-07-13 07:02:06.415053] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:37.214 [2024-07-13 07:02:06.415073] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:37.214 [2024-07-13 07:02:06.416059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:37.214 [2024-07-13 07:02:06.416134] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:37.214 [2024-07-13 07:02:06.416149] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:37.214 [2024-07-13 07:02:06.417071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:37.214 [2024-07-13 07:02:06.417096] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:37.214 [2024-07-13 07:02:06.417165] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:37.214 [2024-07-13 07:02:06.418393] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.214 Life Percentage Used: 0% 00:14:37.214 Data Units Read: 0 00:14:37.214 Data Units Written: 0 00:14:37.214 Host Read Commands: 0 00:14:37.214 Host Write Commands: 0 00:14:37.214 Controller Busy Time: 0 minutes 00:14:37.214 Power Cycles: 0 00:14:37.214 Power On Hours: 0 hours 00:14:37.214 Unsafe Shutdowns: 0 00:14:37.214 Unrecoverable Media Errors: 0 00:14:37.214 Lifetime Error Log Entries: 0 00:14:37.214 Warning Temperature Time: 0 minutes 00:14:37.214 Critical Temperature Time: 0 minutes 00:14:37.214 00:14:37.214 Number of Queues 00:14:37.214 ================ 00:14:37.214 Number of I/O Submission Queues: 127 00:14:37.214 Number of I/O Completion Queues: 127 00:14:37.214 00:14:37.214 Active Namespaces 00:14:37.214 ================= 00:14:37.214 Namespace ID:1 00:14:37.214 Error Recovery Timeout: Unlimited 00:14:37.214 Command Set Identifier: NVM (00h) 00:14:37.214 Deallocate: Supported 00:14:37.214 Deallocated/Unwritten Error: Not Supported 00:14:37.214 Deallocated Read Value: Unknown 00:14:37.214 Deallocate in Write Zeroes: Not Supported 00:14:37.214 Deallocated Guard Field: 0xFFFF 00:14:37.214 Flush: Supported 00:14:37.214 Reservation: Supported 00:14:37.214 Namespace Sharing Capabilities: Multiple Controllers 00:14:37.214 Size (in LBAs): 131072 (0GiB) 00:14:37.214 Capacity (in LBAs): 131072 (0GiB) 00:14:37.214 Utilization (in LBAs): 131072 (0GiB) 00:14:37.214 NGUID: 80BD3AF6611645BFAE5F214C3FC8CF43 00:14:37.214 
UUID: 80bd3af6-6116-45bf-ae5f-214c3fc8cf43 00:14:37.214 Thin Provisioning: Not Supported 00:14:37.214 Per-NS Atomic Units: Yes 00:14:37.214 Atomic Boundary Size (Normal): 0 00:14:37.214 Atomic Boundary Size (PFail): 0 00:14:37.214 Atomic Boundary Offset: 0 00:14:37.214 Maximum Single Source Range Length: 65535 00:14:37.214 Maximum Copy Length: 65535 00:14:37.214 Maximum Source Range Count: 1 00:14:37.215 NGUID/EUI64 Never Reused: No 00:14:37.215 Namespace Write Protected: No 00:14:37.215 Number of LBA Formats: 1 00:14:37.215 Current LBA Format: LBA Format #00 00:14:37.215 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:37.215 00:14:37.215 07:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:37.215 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.215 [2024-07-13 07:02:06.649543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:42.470 Initializing NVMe Controllers 00:14:42.470 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:42.470 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:42.470 Initialization complete. Launching workers. 00:14:42.470 ======================================================== 00:14:42.470 Latency(us) 00:14:42.470 Device Information : IOPS MiB/s Average min max 00:14:42.470 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34855.89 136.16 3671.52 1176.20 7620.05 00:14:42.470 ======================================================== 00:14:42.470 Total : 34855.89 136.16 3671.52 1176.20 7620.05 00:14:42.470 00:14:42.470 [2024-07-13 07:02:11.758248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:42.470 07:02:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:42.470 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.728 [2024-07-13 07:02:11.999933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.988 Initializing NVMe Controllers 00:14:47.988 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:47.988 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:47.988 Initialization complete. Launching workers. 
00:14:47.988 ======================================================== 00:14:47.988 Latency(us) 00:14:47.988 Device Information : IOPS MiB/s Average min max 00:14:47.988 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31931.50 124.73 4008.00 1207.88 7661.41 00:14:47.988 ======================================================== 00:14:47.988 Total : 31931.50 124.73 4008.00 1207.88 7661.41 00:14:47.988 00:14:47.988 [2024-07-13 07:02:17.022105] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.988 07:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:47.988 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.988 [2024-07-13 07:02:17.224635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:53.248 [2024-07-13 07:02:22.362022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:53.248 Initializing NVMe Controllers 00:14:53.248 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:53.248 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:53.248 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:53.248 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:53.248 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:53.248 Initialization complete. Launching workers. 00:14:53.248 Starting thread on core 2 00:14:53.248 Starting thread on core 3 00:14:53.248 Starting thread on core 1 00:14:53.248 07:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:53.248 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.248 [2024-07-13 07:02:22.676365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.576 [2024-07-13 07:02:25.746156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.576 Initializing NVMe Controllers 00:14:56.576 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.576 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:56.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:56.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:56.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:56.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:56.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:56.576 Initialization complete. Launching workers. 
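As a quick cross-check of the two spdk_nvme_perf tables above: both runs used -o 4096, so the MiB/s column is just IOPS * 4096 / 2^20. The arithmetic below is ours, not part of the test output:

    $ awk 'BEGIN { printf "%.2f\n", 34855.89 * 4096 / 1048576 }'   # read run
    136.16
    $ awk 'BEGIN { printf "%.2f\n", 31931.50 * 4096 / 1048576 }'   # write run
    124.73

The arbitration table further down can be checked the same way: its secs/100000 ios column is 100000 / IO/s (100000 / 4967.33 = 20.13 for core 0). Likewise, the namespace dump earlier reports Size (in LBAs): 131072 with a 512-byte LBA format, i.e. 131072 * 512 B = 64 MiB, small enough to round down to the 0GiB shown.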
00:14:56.576 Starting thread on core 1 with urgent priority queue 00:14:56.576 Starting thread on core 2 with urgent priority queue 00:14:56.576 Starting thread on core 3 with urgent priority queue 00:14:56.576 Starting thread on core 0 with urgent priority queue 00:14:56.576 SPDK bdev Controller (SPDK2 ) core 0: 4967.33 IO/s 20.13 secs/100000 ios 00:14:56.576 SPDK bdev Controller (SPDK2 ) core 1: 5368.00 IO/s 18.63 secs/100000 ios 00:14:56.576 SPDK bdev Controller (SPDK2 ) core 2: 5115.00 IO/s 19.55 secs/100000 ios 00:14:56.576 SPDK bdev Controller (SPDK2 ) core 3: 5567.33 IO/s 17.96 secs/100000 ios 00:14:56.576 ======================================================== 00:14:56.576 00:14:56.576 07:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:56.576 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.833 [2024-07-13 07:02:26.034113] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.833 Initializing NVMe Controllers 00:14:56.833 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.834 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.834 Namespace ID: 1 size: 0GB 00:14:56.834 Initialization complete. 00:14:56.834 INFO: using host memory buffer for IO 00:14:56.834 Hello world! 00:14:56.834 [2024-07-13 07:02:26.046212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.834 07:02:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:56.834 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.092 [2024-07-13 07:02:26.332205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.026 Initializing NVMe Controllers 00:14:58.026 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:58.026 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:58.026 Initialization complete. Launching workers. 
00:14:58.026 submit (in ns) avg, min, max = 7856.2, 3505.6, 4015512.2 00:14:58.026 complete (in ns) avg, min, max = 23666.4, 2082.2, 4017446.7 00:14:58.026 00:14:58.026 Submit histogram 00:14:58.026 ================ 00:14:58.026 Range in us Cumulative Count 00:14:58.026 3.484 - 3.508: 0.0375% ( 5) 00:14:58.026 3.508 - 3.532: 0.3447% ( 41) 00:14:58.026 3.532 - 3.556: 1.0190% ( 90) 00:14:58.026 3.556 - 3.579: 3.1021% ( 278) 00:14:58.026 3.579 - 3.603: 6.4064% ( 441) 00:14:58.026 3.603 - 3.627: 12.5506% ( 820) 00:14:58.026 3.627 - 3.650: 19.9610% ( 989) 00:14:58.026 3.650 - 3.674: 28.5928% ( 1152) 00:14:58.026 3.674 - 3.698: 36.9399% ( 1114) 00:14:58.026 3.698 - 3.721: 46.0587% ( 1217) 00:14:58.026 3.721 - 3.745: 53.0271% ( 930) 00:14:58.026 3.745 - 3.769: 58.5793% ( 741) 00:14:58.026 3.769 - 3.793: 62.4756% ( 520) 00:14:58.026 3.793 - 3.816: 66.4619% ( 532) 00:14:58.026 3.816 - 3.840: 69.8262% ( 449) 00:14:58.026 3.840 - 3.864: 73.6251% ( 507) 00:14:58.026 3.864 - 3.887: 76.9144% ( 439) 00:14:58.026 3.887 - 3.911: 80.2563% ( 446) 00:14:58.026 3.911 - 3.935: 83.5082% ( 434) 00:14:58.026 3.935 - 3.959: 85.9583% ( 327) 00:14:58.026 3.959 - 3.982: 88.1013% ( 286) 00:14:58.026 3.982 - 4.006: 89.7048% ( 214) 00:14:58.026 4.006 - 4.030: 91.1584% ( 194) 00:14:58.026 4.030 - 4.053: 92.2898% ( 151) 00:14:58.026 4.053 - 4.077: 93.2714% ( 131) 00:14:58.026 4.077 - 4.101: 94.0507% ( 104) 00:14:58.026 4.101 - 4.124: 94.7475% ( 93) 00:14:58.026 4.124 - 4.148: 95.1371% ( 52) 00:14:58.026 4.148 - 4.172: 95.4368% ( 40) 00:14:58.026 4.172 - 4.196: 95.6242% ( 25) 00:14:58.026 4.196 - 4.219: 95.7740% ( 20) 00:14:58.026 4.219 - 4.243: 95.8789% ( 14) 00:14:58.026 4.243 - 4.267: 95.9838% ( 14) 00:14:58.026 4.267 - 4.290: 96.1262% ( 19) 00:14:58.026 4.290 - 4.314: 96.2011% ( 10) 00:14:58.026 4.314 - 4.338: 96.3435% ( 19) 00:14:58.026 4.338 - 4.361: 96.4409% ( 13) 00:14:58.026 4.361 - 4.385: 96.5308% ( 12) 00:14:58.026 4.385 - 4.409: 96.6357% ( 14) 00:14:58.026 4.409 - 4.433: 96.7031% ( 9) 00:14:58.026 4.433 - 4.456: 96.7706% ( 9) 00:14:58.026 4.456 - 4.480: 96.8530% ( 11) 00:14:58.026 4.504 - 4.527: 96.9129% ( 8) 00:14:58.026 4.527 - 4.551: 96.9579% ( 6) 00:14:58.026 4.551 - 4.575: 96.9804% ( 3) 00:14:58.026 4.575 - 4.599: 96.9879% ( 1) 00:14:58.026 4.599 - 4.622: 97.0178% ( 4) 00:14:58.026 4.622 - 4.646: 97.0253% ( 1) 00:14:58.026 4.646 - 4.670: 97.0328% ( 1) 00:14:58.026 4.670 - 4.693: 97.0628% ( 4) 00:14:58.026 4.693 - 4.717: 97.0703% ( 1) 00:14:58.026 4.788 - 4.812: 97.0778% ( 1) 00:14:58.026 4.812 - 4.836: 97.0928% ( 2) 00:14:58.026 4.836 - 4.859: 97.1227% ( 4) 00:14:58.026 4.883 - 4.907: 97.1377% ( 2) 00:14:58.026 4.907 - 4.930: 97.1752% ( 5) 00:14:58.026 4.930 - 4.954: 97.2126% ( 5) 00:14:58.026 4.954 - 4.978: 97.2576% ( 6) 00:14:58.026 4.978 - 5.001: 97.3175% ( 8) 00:14:58.026 5.001 - 5.025: 97.3850% ( 9) 00:14:58.026 5.025 - 5.049: 97.4674% ( 11) 00:14:58.026 5.049 - 5.073: 97.5124% ( 6) 00:14:58.026 5.073 - 5.096: 97.5873% ( 10) 00:14:58.026 5.096 - 5.120: 97.6397% ( 7) 00:14:58.026 5.120 - 5.144: 97.6997% ( 8) 00:14:58.026 5.144 - 5.167: 97.7596% ( 8) 00:14:58.026 5.167 - 5.191: 97.7896% ( 4) 00:14:58.026 5.191 - 5.215: 97.7971% ( 1) 00:14:58.026 5.215 - 5.239: 97.8346% ( 5) 00:14:58.026 5.239 - 5.262: 97.8870% ( 7) 00:14:58.026 5.262 - 5.286: 97.9245% ( 5) 00:14:58.026 5.286 - 5.310: 97.9320% ( 1) 00:14:58.026 5.310 - 5.333: 97.9619% ( 4) 00:14:58.026 5.357 - 5.381: 98.0069% ( 6) 00:14:58.026 5.381 - 5.404: 98.0369% ( 4) 00:14:58.026 5.404 - 5.428: 98.0444% ( 1) 00:14:58.026 5.428 - 5.452: 98.0818% ( 5) 
00:14:58.026 5.452 - 5.476: 98.0968% ( 2) 00:14:58.026 5.476 - 5.499: 98.1043% ( 1) 00:14:58.026 5.523 - 5.547: 98.1118% ( 1) 00:14:58.026 5.547 - 5.570: 98.1193% ( 1) 00:14:58.026 5.570 - 5.594: 98.1268% ( 1) 00:14:58.026 5.594 - 5.618: 98.1343% ( 1) 00:14:58.026 5.618 - 5.641: 98.1418% ( 1) 00:14:58.026 5.641 - 5.665: 98.1717% ( 4) 00:14:58.026 5.665 - 5.689: 98.1792% ( 1) 00:14:58.026 5.689 - 5.713: 98.1867% ( 1) 00:14:58.026 5.736 - 5.760: 98.2317% ( 6) 00:14:58.026 5.760 - 5.784: 98.2392% ( 1) 00:14:58.026 5.784 - 5.807: 98.2467% ( 1) 00:14:58.026 5.807 - 5.831: 98.2617% ( 2) 00:14:58.026 5.855 - 5.879: 98.2916% ( 4) 00:14:58.026 5.879 - 5.902: 98.3291% ( 5) 00:14:58.026 5.902 - 5.926: 98.3366% ( 1) 00:14:58.026 5.926 - 5.950: 98.3441% ( 1) 00:14:58.026 5.973 - 5.997: 98.3666% ( 3) 00:14:58.026 5.997 - 6.021: 98.3965% ( 4) 00:14:58.026 6.021 - 6.044: 98.4040% ( 1) 00:14:58.026 6.044 - 6.068: 98.4265% ( 3) 00:14:58.026 6.116 - 6.163: 98.4565% ( 4) 00:14:58.026 6.163 - 6.210: 98.4715% ( 2) 00:14:58.026 6.210 - 6.258: 98.4864% ( 2) 00:14:58.026 6.258 - 6.305: 98.5014% ( 2) 00:14:58.026 6.353 - 6.400: 98.5089% ( 1) 00:14:58.026 6.590 - 6.637: 98.5164% ( 1) 00:14:58.026 6.732 - 6.779: 98.5239% ( 1) 00:14:58.026 6.779 - 6.827: 98.5314% ( 1) 00:14:58.026 6.874 - 6.921: 98.5389% ( 1) 00:14:58.026 7.159 - 7.206: 98.5464% ( 1) 00:14:58.026 7.253 - 7.301: 98.5539% ( 1) 00:14:58.026 7.301 - 7.348: 98.5614% ( 1) 00:14:58.026 7.348 - 7.396: 98.5689% ( 1) 00:14:58.026 7.396 - 7.443: 98.5838% ( 2) 00:14:58.026 7.443 - 7.490: 98.5913% ( 1) 00:14:58.026 7.490 - 7.538: 98.5988% ( 1) 00:14:58.026 7.538 - 7.585: 98.6063% ( 1) 00:14:58.026 7.633 - 7.680: 98.6138% ( 1) 00:14:58.026 7.727 - 7.775: 98.6288% ( 2) 00:14:58.026 7.870 - 7.917: 98.6438% ( 2) 00:14:58.026 7.917 - 7.964: 98.6588% ( 2) 00:14:58.026 7.964 - 8.012: 98.6663% ( 1) 00:14:58.026 8.154 - 8.201: 98.6813% ( 2) 00:14:58.026 8.201 - 8.249: 98.6887% ( 1) 00:14:58.026 8.344 - 8.391: 98.7037% ( 2) 00:14:58.026 8.391 - 8.439: 98.7112% ( 1) 00:14:58.026 8.439 - 8.486: 98.7187% ( 1) 00:14:58.026 8.628 - 8.676: 98.7262% ( 1) 00:14:58.026 8.723 - 8.770: 98.7337% ( 1) 00:14:58.026 9.055 - 9.102: 98.7412% ( 1) 00:14:58.026 9.292 - 9.339: 98.7487% ( 1) 00:14:58.026 9.339 - 9.387: 98.7562% ( 1) 00:14:58.026 9.624 - 9.671: 98.7637% ( 1) 00:14:58.026 9.719 - 9.766: 98.7712% ( 1) 00:14:58.026 9.861 - 9.908: 98.7787% ( 1) 00:14:58.026 9.956 - 10.003: 98.7936% ( 2) 00:14:58.026 10.240 - 10.287: 98.8011% ( 1) 00:14:58.026 10.287 - 10.335: 98.8086% ( 1) 00:14:58.026 10.524 - 10.572: 98.8236% ( 2) 00:14:58.026 10.572 - 10.619: 98.8311% ( 1) 00:14:58.026 10.856 - 10.904: 98.8461% ( 2) 00:14:58.026 11.046 - 11.093: 98.8536% ( 1) 00:14:58.026 11.188 - 11.236: 98.8611% ( 1) 00:14:58.026 11.330 - 11.378: 98.8686% ( 1) 00:14:58.026 11.662 - 11.710: 98.8761% ( 1) 00:14:58.026 12.421 - 12.516: 98.8836% ( 1) 00:14:58.026 12.516 - 12.610: 98.8985% ( 2) 00:14:58.026 12.800 - 12.895: 98.9060% ( 1) 00:14:58.026 12.895 - 12.990: 98.9135% ( 1) 00:14:58.026 12.990 - 13.084: 98.9210% ( 1) 00:14:58.026 13.274 - 13.369: 98.9285% ( 1) 00:14:58.026 13.369 - 13.464: 98.9360% ( 1) 00:14:58.026 14.033 - 14.127: 98.9435% ( 1) 00:14:58.026 14.127 - 14.222: 98.9510% ( 1) 00:14:58.026 14.222 - 14.317: 98.9585% ( 1) 00:14:58.026 14.317 - 14.412: 98.9660% ( 1) 00:14:58.026 14.507 - 14.601: 98.9810% ( 2) 00:14:58.026 14.696 - 14.791: 98.9885% ( 1) 00:14:58.026 14.791 - 14.886: 98.9960% ( 1) 00:14:58.026 17.067 - 17.161: 99.0184% ( 3) 00:14:58.026 17.161 - 17.256: 99.0334% ( 2) 00:14:58.026 
17.256 - 17.351: 99.0559% ( 3) 00:14:58.026 17.446 - 17.541: 99.0634% ( 1) 00:14:58.026 17.541 - 17.636: 99.0859% ( 3) 00:14:58.026 17.636 - 17.730: 99.1383% ( 7) 00:14:58.026 17.730 - 17.825: 99.1758% ( 5) 00:14:58.026 17.825 - 17.920: 99.2432% ( 9) 00:14:58.026 17.920 - 18.015: 99.2882% ( 6) 00:14:58.026 18.015 - 18.110: 99.3406% ( 7) 00:14:58.026 18.110 - 18.204: 99.3781% ( 5) 00:14:58.026 18.204 - 18.299: 99.4755% ( 13) 00:14:58.026 18.299 - 18.394: 99.5279% ( 7) 00:14:58.026 18.394 - 18.489: 99.5654% ( 5) 00:14:58.026 18.489 - 18.584: 99.6254% ( 8) 00:14:58.026 18.584 - 18.679: 99.6703% ( 6) 00:14:58.026 18.679 - 18.773: 99.7153% ( 6) 00:14:58.026 18.773 - 18.868: 99.7303% ( 2) 00:14:58.026 18.868 - 18.963: 99.7377% ( 1) 00:14:58.026 19.058 - 19.153: 99.7527% ( 2) 00:14:58.026 19.153 - 19.247: 99.7902% ( 5) 00:14:58.027 19.247 - 19.342: 99.8277% ( 5) 00:14:58.027 19.627 - 19.721: 99.8352% ( 1) 00:14:58.027 19.721 - 19.816: 99.8426% ( 1) 00:14:58.027 20.385 - 20.480: 99.8501% ( 1) 00:14:58.027 20.954 - 21.049: 99.8576% ( 1) 00:14:58.027 22.566 - 22.661: 99.8651% ( 1) 00:14:58.027 23.135 - 23.230: 99.8726% ( 1) 00:14:58.027 26.169 - 26.359: 99.8876% ( 2) 00:14:58.027 27.876 - 28.065: 99.8951% ( 1) 00:14:58.027 29.013 - 29.203: 99.9026% ( 1) 00:14:58.027 3980.705 - 4004.978: 99.9700% ( 9) 00:14:58.027 4004.978 - 4029.250: 100.0000% ( 4) 00:14:58.027 00:14:58.027 Complete histogram 00:14:58.027 ================== 00:14:58.027 Range in us Cumulative Count 00:14:58.027 2.074 - 2.086: 0.3896% ( 52) 00:14:58.027 2.086 - 2.098: 26.1202% ( 3434) 00:14:58.027 2.098 - 2.110: 41.8178% ( 2095) 00:14:58.027 2.110 - 2.121: 44.6651% ( 380) 00:14:58.027 2.121 - 2.133: 54.9303% ( 1370) 00:14:58.027 2.133 - 2.145: 58.8266% ( 520) 00:14:58.027 2.145 - 2.157: 61.0895% ( 302) 00:14:58.027 2.157 - 2.169: 74.8764% ( 1840) 00:14:58.027 2.169 - 2.181: 79.1098% ( 565) 00:14:58.027 2.181 - 2.193: 80.8932% ( 238) 00:14:58.027 2.193 - 2.204: 85.4338% ( 606) 00:14:58.027 2.204 - 2.216: 86.9024% ( 196) 00:14:58.027 2.216 - 2.228: 87.7791% ( 117) 00:14:58.027 2.228 - 2.240: 89.8172% ( 272) 00:14:58.027 2.240 - 2.252: 91.8927% ( 277) 00:14:58.027 2.252 - 2.264: 93.2639% ( 183) 00:14:58.027 2.264 - 2.276: 94.0132% ( 100) 00:14:58.027 2.276 - 2.287: 94.4178% ( 54) 00:14:58.027 2.287 - 2.299: 94.6501% ( 31) 00:14:58.027 2.299 - 2.311: 94.8074% ( 21) 00:14:58.027 2.311 - 2.323: 95.0322% ( 30) 00:14:58.027 2.323 - 2.335: 95.2720% ( 32) 00:14:58.027 2.335 - 2.347: 95.3694% ( 13) 00:14:58.027 2.347 - 2.359: 95.4144% ( 6) 00:14:58.027 2.359 - 2.370: 95.5642% ( 20) 00:14:58.027 2.370 - 2.382: 95.7515% ( 25) 00:14:58.027 2.382 - 2.394: 95.9538% ( 27) 00:14:58.027 2.394 - 2.406: 96.2386% ( 38) 00:14:58.027 2.406 - 2.418: 96.5233% ( 38) 00:14:58.027 2.418 - 2.430: 96.6881% ( 22) 00:14:58.027 2.430 - 2.441: 96.8905% ( 27) 00:14:58.027 2.441 - 2.453: 96.9954% ( 14) 00:14:58.027 2.453 - 2.465: 97.0703% ( 10) 00:14:58.027 2.465 - 2.477: 97.1302% ( 8) 00:14:58.027 2.477 - 2.489: 97.1827% ( 7) 00:14:58.027 2.489 - 2.501: 97.2651% ( 11) 00:14:58.027 2.501 - 2.513: 97.2876% ( 3) 00:14:58.027 2.513 - 2.524: 97.3250% ( 5) 00:14:58.027 2.524 - 2.536: 97.3925% ( 9) 00:14:58.027 2.536 - 2.548: 97.4374% ( 6) 00:14:58.027 2.548 - 2.560: 97.4749% ( 5) 00:14:58.027 2.560 - 2.572: 97.5348% ( 8) 00:14:58.027 2.572 - 2.584: 97.6098% ( 10) 00:14:58.027 2.584 - 2.596: 97.6772% ( 9) 00:14:58.027 2.596 - 2.607: 97.7371% ( 8) 00:14:58.027 2.607 - 2.619: 97.7596% ( 3) 00:14:58.027 2.619 - 2.631: 97.7971% ( 5) 00:14:58.027 2.631 - 2.643: 97.8421% ( 6) 
00:14:58.027 2.643 - 2.655: 97.8645% ( 3) 00:14:58.027 2.655 - 2.667: 97.8945% ( 4) 00:14:58.027 2.667 - 2.679: 97.9095% ( 2) 00:14:58.027 2.679 - 2.690: 97.9170% ( 1) 00:14:58.027 2.690 - 2.702: 97.9245% ( 1) 00:14:58.027 2.702 - 2.714: 97.9320% ( 1) 00:14:58.027 2.714 - 2.726: 97.9544% ( 3) 00:14:58.027 2.726 - 2.738: 97.9694% ( 2) 00:14:58.027 2.750 - 2.761: 97.9919% ( 3) 00:14:58.027 2.761 - 2.773: 97.9994% ( 1) 00:14:58.027 2.773 - 2.785: 98.0219% ( 3) 00:14:58.027 2.785 - 2.797: 98.0369% ( 2) 00:14:58.027 2.797 - 2.809: 98.0444% ( 1) 00:14:58.027 2.809 - 2.821: 98.0593% ( 2) 00:14:58.027 2.821 - 2.833: 98.0743% ( 2) 00:14:58.027 2.844 - 2.856: 98.0893% ( 2) 00:14:58.027 2.856 - 2.868: 98.1043% ( 2) 00:14:58.027 2.892 - 2.904: 98.1268% ( 3) 00:14:58.027 2.904 - 2.916: 98.1418% ( 2) 00:14:58.027 2.927 - 2.939: 98.1493% ( 1) 00:14:58.027 2.939 - 2.951: 98.1568% ( 1) 00:14:58.027 2.951 - 2.963: 98.1717% ( 2) 00:14:58.027 2.963 - 2.975: 98.1792% ( 1) 00:14:58.027 2.975 - 2.987: 98.1867% ( 1) 00:14:58.027 2.987 - 2.999: 98.1942% ( 1) 00:14:58.027 2.999 - 3.010: 98.2017% ( 1) 00:14:58.027 3.010 - 3.022: 98.2167% ( 2) 00:14:58.027 3.022 - 3.034: 98.2392% ( 3) 00:14:58.027 3.034 - 3.058: 98.2467% ( 1) 00:14:58.027 3.058 - 3.081: 98.2916% ( 6) 00:14:58.027 3.081 - 3.105: 98.3066% ( 2) 00:14:58.027 3.105 - 3.129: 98.3441% ( 5) 00:14:58.027 3.129 - 3.153: 98.3516% ( 1) 00:14:58.027 3.153 - 3.176: 98.3740% ( 3) 00:14:58.027 3.176 - 3.200: 98.3815% ( 1) 00:14:58.027 3.200 - 3.224: 98.3965% ( 2) 00:14:58.027 3.224 - 3.247: 98.4190% ( 3) 00:14:58.027 3.247 - 3.271: 98.4340% ( 2) 00:14:58.027 3.271 - 3.295: 98.4490% ( 2) 00:14:58.027 3.295 - 3.319: 98.4565% ( 1) 00:14:58.027 3.319 - 3.342: 98.4715% ( 2) 00:14:58.027 3.342 - 3.366: 98.4939% ( 3) 00:14:58.027 3.366 - 3.390: 98.5089% ( 2) 00:14:58.027 3.390 - 3.413: 98.5164% ( 1) 00:14:58.027 3.413 - 3.437: 98.5239% ( 1) 00:14:58.027 3.437 - 3.461: 98.5464% ( 3) 00:14:58.027 3.461 - 3.484: 98.5614% ( 2) 00:14:58.027 3.579 - 3.603: 98.5838% ( 3) 00:14:58.027 3.603 - 3.627: 98.5988% ( 2) 00:14:58.027 3.627 - 3.650: 98.6063% ( 1) 00:14:58.027 3.650 - 3.674: 98.6213% ( 2) 00:14:58.027 3.674 - 3.698: 98.6288% ( 1) 00:14:58.027 3.698 - 3.721: 98.6438% ( 2) 00:14:58.027 3.721 - 3.745: 98.6588% ( 2) 00:14:58.027 3.745 - 3.769: 98.6663% ( 1) 00:14:58.027 3.816 - 3.840: 98.6738% ( 1) 00:14:58.027 3.840 - 3.864: 98.6962% ( 3) 00:14:58.027 3.864 - 3.887: 98.7112% ( 2) 00:14:58.027 3.959 - 3.982: 98.7187% ( 1) 00:14:58.027 3.982 - 4.006: 98.7262% ( 1) 00:14:58.027 4.006 - 4.030: 98.7412% ( 2) 00:14:58.027 4.030 - 4.053: 98.7487% ( 1) 00:14:58.027 4.053 - 4.077: 98.7562% ( 1) 00:14:58.027 4.101 - 4.124: 98.7712% ( 2) 00:14:58.027 4.196 - 4.219: 98.7787% ( 1) 00:14:58.027 4.219 - 4.243: 98.7862% ( 1) 00:14:58.027 4.314 - 4.338: 98.7936% ( 1) 00:14:58.027 4.338 - 4.361: 98.8011% ( 1) 00:14:58.027 4.433 - 4.456: 98.8086% ( 1) 00:14:58.027 4.670 - 4.693: 98.8161% ( 1) 00:14:58.027 4.693 - 4.717: 98.8236% ( 1) 00:14:58.027 4.859 - 4.883: 98.8311% ( 1) 00:14:58.027 4.883 - 4.907: 98.8386% ( 1) 00:14:58.027 4.954 - 4.978: 98.8461% ( 1) 00:14:58.027 5.120 - 5.144: 98.8611% ( 2) 00:14:58.027 5.144 - 5.167: 98.8686% ( 1) 00:14:58.027 5.239 - 5.262: 98.8761% ( 1) 00:14:58.027 5.262 - 5.286: 98.8836% ( 1) 00:14:58.027 5.404 - 5.428: 98.8911% ( 1) 00:14:58.027 5.428 - 5.452: 98.8985% ( 1) 00:14:58.027 5.665 - 5.689: 98.9060% ( 1) 00:14:58.027 5.689 - 5.713: 98.9135% ( 1) 00:14:58.027 5.713 - 5.736: 98.9210% ( 1) 00:14:58.027 5.807 - 5.831: 98.9285% ( 1) 00:14:58.027 5.879 - 
5.902: 98.9360% ( 1) 00:14:58.028 5.902 - 5.926: 98.9435% ( 1) 00:14:58.028 5.950 - 5.973: 98.9510% ( 1) 00:14:58.028 6.116 - 6.163: 98.9585% ( 1) 00:14:58.028 6.779 - 6.827: 98.9660% ( 1) 00:14:58.028 6.969 - 7.016: 98.9810% ( 2) 00:14:58.028 7.111 - 7.159: 98.9885% ( 1) 00:14:58.028 7.159 - 7.206: 98.9960% ( 1) 00:14:58.028 7.348 - 7.396: 99.0034% ( 1) 00:14:58.028 7.822 - 7.870: 99.0109% ( 1) 00:14:58.028 7.964 - 8.012: 99.0184% ( 1) 00:14:58.028 8.201 - 8.249: 99.0259% ( 1) 00:14:58.028 8.723 - 8.770: 99.0334% ( 1) 00:14:58.028 9.434 - 9.481: 99.0409% ( 1) 00:14:58.028 15.644 - 15.739: 99.0484% ( 1) 00:14:58.028 15.834 - 15.929: 99.0634% ( 2) [2024-07-13 07:02:27.426618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.028 15.929 - 16.024: 99.0784% ( 2) 00:14:58.028 16.024 - 16.119: 99.1009% ( 3) 00:14:58.028 16.119 - 16.213: 99.1083% ( 1) 00:14:58.028 16.213 - 16.308: 99.1308% ( 3) 00:14:58.028 16.308 - 16.403: 99.1608% ( 4) 00:14:58.028 16.403 - 16.498: 99.1908% ( 4) 00:14:58.028 16.498 - 16.593: 99.2282% ( 5) 00:14:58.028 16.593 - 16.687: 99.2507% ( 3) 00:14:58.028 16.687 - 16.782: 99.3331% ( 11) 00:14:58.028 16.782 - 16.877: 99.3781% ( 6) 00:14:58.028 16.877 - 16.972: 99.3856% ( 1) 00:14:58.028 16.972 - 17.067: 99.3931% ( 1) 00:14:58.028 17.067 - 17.161: 99.4081% ( 2) 00:14:58.028 17.256 - 17.351: 99.4156% ( 1) 00:14:58.028 18.110 - 18.204: 99.4230% ( 1) 00:14:58.028 18.394 - 18.489: 99.4305% ( 1) 00:14:58.028 18.489 - 18.584: 99.4380% ( 1) 00:14:58.028 18.679 - 18.773: 99.4455% ( 1) 00:14:58.028 19.153 - 19.247: 99.4530% ( 1) 00:14:58.028 19.721 - 19.816: 99.4605% ( 1) 00:14:58.028 3034.074 - 3046.210: 99.4680% ( 1) 00:14:58.028 3058.347 - 3070.483: 99.4755% ( 1) 00:14:58.028 3980.705 - 4004.978: 99.9176% ( 59) 00:14:58.028 4004.978 - 4029.250: 100.0000% ( 11) 00:14:58.028 00:14:58.028 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:58.028 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:58.028 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:58.028 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:58.028 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:58.286 [ 00:14:58.286 { 00:14:58.286 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.286 "subtype": "Discovery", 00:14:58.286 "listen_addresses": [], 00:14:58.286 "allow_any_host": true, 00:14:58.286 "hosts": [] 00:14:58.286 }, 00:14:58.286 { 00:14:58.286 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:58.286 "subtype": "NVMe", 00:14:58.286 "listen_addresses": [ 00:14:58.286 { 00:14:58.286 "trtype": "VFIOUSER", 00:14:58.286 "adrfam": "IPv4", 00:14:58.286 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:58.286 "trsvcid": "0" 00:14:58.286 } 00:14:58.286 ], 00:14:58.286 "allow_any_host": true, 00:14:58.286 "hosts": [], 00:14:58.286 "serial_number": "SPDK1", 00:14:58.286 "model_number": "SPDK bdev Controller", 00:14:58.286 "max_namespaces": 32, 00:14:58.286 "min_cntlid": 1, 00:14:58.286 "max_cntlid": 65519, 00:14:58.286 "namespaces": [ 00:14:58.286 { 00:14:58.286 "nsid": 1, 00:14:58.286 "bdev_name": "Malloc1", 00:14:58.286 "name": "Malloc1", 
00:14:58.286 "nguid": "F85661A8155F426B9D4B108DCC119F6F", 00:14:58.286 "uuid": "f85661a8-155f-426b-9d4b-108dcc119f6f" 00:14:58.286 }, 00:14:58.286 { 00:14:58.286 "nsid": 2, 00:14:58.286 "bdev_name": "Malloc3", 00:14:58.286 "name": "Malloc3", 00:14:58.286 "nguid": "B413815FC3E3466D8B31B5AA26BD1298", 00:14:58.286 "uuid": "b413815f-c3e3-466d-8b31-b5aa26bd1298" 00:14:58.286 } 00:14:58.286 ] 00:14:58.286 }, 00:14:58.286 { 00:14:58.286 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:58.286 "subtype": "NVMe", 00:14:58.286 "listen_addresses": [ 00:14:58.286 { 00:14:58.286 "trtype": "VFIOUSER", 00:14:58.286 "adrfam": "IPv4", 00:14:58.286 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:58.286 "trsvcid": "0" 00:14:58.286 } 00:14:58.286 ], 00:14:58.286 "allow_any_host": true, 00:14:58.286 "hosts": [], 00:14:58.286 "serial_number": "SPDK2", 00:14:58.286 "model_number": "SPDK bdev Controller", 00:14:58.286 "max_namespaces": 32, 00:14:58.286 "min_cntlid": 1, 00:14:58.286 "max_cntlid": 65519, 00:14:58.286 "namespaces": [ 00:14:58.286 { 00:14:58.286 "nsid": 1, 00:14:58.286 "bdev_name": "Malloc2", 00:14:58.286 "name": "Malloc2", 00:14:58.286 "nguid": "80BD3AF6611645BFAE5F214C3FC8CF43", 00:14:58.286 "uuid": "80bd3af6-6116-45bf-ae5f-214c3fc8cf43" 00:14:58.286 } 00:14:58.286 ] 00:14:58.286 } 00:14:58.286 ] 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1480422 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:58.286 07:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:58.544 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.544 [2024-07-13 07:02:27.887341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.802 Malloc4 00:14:58.802 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:58.802 [2024-07-13 07:02:28.228836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.802 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:59.060 Asynchronous Event Request test 00:14:59.060 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.060 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.060 Registering asynchronous event callbacks... 00:14:59.060 Starting namespace attribute notice tests for all controllers... 00:14:59.060 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:59.060 aer_cb - Changed Namespace 00:14:59.060 Cleaning up... 00:14:59.060 [ 00:14:59.060 { 00:14:59.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:59.060 "subtype": "Discovery", 00:14:59.060 "listen_addresses": [], 00:14:59.060 "allow_any_host": true, 00:14:59.060 "hosts": [] 00:14:59.060 }, 00:14:59.060 { 00:14:59.060 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:59.060 "subtype": "NVMe", 00:14:59.060 "listen_addresses": [ 00:14:59.060 { 00:14:59.060 "trtype": "VFIOUSER", 00:14:59.060 "adrfam": "IPv4", 00:14:59.060 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:59.060 "trsvcid": "0" 00:14:59.060 } 00:14:59.060 ], 00:14:59.060 "allow_any_host": true, 00:14:59.060 "hosts": [], 00:14:59.060 "serial_number": "SPDK1", 00:14:59.060 "model_number": "SPDK bdev Controller", 00:14:59.060 "max_namespaces": 32, 00:14:59.060 "min_cntlid": 1, 00:14:59.060 "max_cntlid": 65519, 00:14:59.060 "namespaces": [ 00:14:59.060 { 00:14:59.060 "nsid": 1, 00:14:59.060 "bdev_name": "Malloc1", 00:14:59.060 "name": "Malloc1", 00:14:59.060 "nguid": "F85661A8155F426B9D4B108DCC119F6F", 00:14:59.060 "uuid": "f85661a8-155f-426b-9d4b-108dcc119f6f" 00:14:59.060 }, 00:14:59.060 { 00:14:59.060 "nsid": 2, 00:14:59.060 "bdev_name": "Malloc3", 00:14:59.060 "name": "Malloc3", 00:14:59.060 "nguid": "B413815FC3E3466D8B31B5AA26BD1298", 00:14:59.060 "uuid": "b413815f-c3e3-466d-8b31-b5aa26bd1298" 00:14:59.060 } 00:14:59.060 ] 00:14:59.060 }, 00:14:59.060 { 00:14:59.060 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:59.060 "subtype": "NVMe", 00:14:59.060 "listen_addresses": [ 00:14:59.060 { 00:14:59.060 "trtype": "VFIOUSER", 00:14:59.060 "adrfam": "IPv4", 00:14:59.061 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:59.061 "trsvcid": "0" 00:14:59.061 } 00:14:59.061 ], 00:14:59.061 "allow_any_host": true, 00:14:59.061 "hosts": [], 00:14:59.061 "serial_number": "SPDK2", 00:14:59.061 "model_number": "SPDK bdev Controller", 00:14:59.061 
"max_namespaces": 32, 00:14:59.061 "min_cntlid": 1, 00:14:59.061 "max_cntlid": 65519, 00:14:59.061 "namespaces": [ 00:14:59.061 { 00:14:59.061 "nsid": 1, 00:14:59.061 "bdev_name": "Malloc2", 00:14:59.061 "name": "Malloc2", 00:14:59.061 "nguid": "80BD3AF6611645BFAE5F214C3FC8CF43", 00:14:59.061 "uuid": "80bd3af6-6116-45bf-ae5f-214c3fc8cf43" 00:14:59.061 }, 00:14:59.061 { 00:14:59.061 "nsid": 2, 00:14:59.061 "bdev_name": "Malloc4", 00:14:59.061 "name": "Malloc4", 00:14:59.061 "nguid": "5BCCE181628C411E85EE3F9AD0332AFB", 00:14:59.061 "uuid": "5bcce181-628c-411e-85ee-3f9ad0332afb" 00:14:59.061 } 00:14:59.061 ] 00:14:59.061 } 00:14:59.061 ] 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1480422 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1474834 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1474834 ']' 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1474834 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.061 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1474834 00:14:59.319 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.319 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.319 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1474834' 00:14:59.319 killing process with pid 1474834 00:14:59.319 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1474834 00:14:59.319 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1474834 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1480568 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1480568' 00:14:59.578 Process pid: 1480568 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1480568 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1480568 ']' 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.578 07:02:28 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.578 07:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:59.578 [2024-07-13 07:02:28.883008] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:59.578 [2024-07-13 07:02:28.884004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:59.578 [2024-07-13 07:02:28.884066] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.578 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.578 [2024-07-13 07:02:28.917648] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:59.578 [2024-07-13 07:02:28.944327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.578 [2024-07-13 07:02:29.028722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.578 [2024-07-13 07:02:29.028775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.578 [2024-07-13 07:02:29.028799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.578 [2024-07-13 07:02:29.028811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.578 [2024-07-13 07:02:29.028821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.578 [2024-07-13 07:02:29.028904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.578 [2024-07-13 07:02:29.028958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.578 [2024-07-13 07:02:29.028930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.578 [2024-07-13 07:02:29.028955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.837 [2024-07-13 07:02:29.124583] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:59.837 [2024-07-13 07:02:29.124766] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:59.837 [2024-07-13 07:02:29.125068] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:59.837 [2024-07-13 07:02:29.125682] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:59.837 [2024-07-13 07:02:29.125928] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
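The xtrace that follows rebuilds the two vfio-user targets one RPC at a time. Condensed into a plain script for device 1 (a sketch assuming SPDK's scripts/rpc.py on its default /var/tmp/spdk.sock socket; the paths, sizes and NQNs are the ones that appear in the trace, and device 2 repeats the same steps with Malloc2/cnode2/SPDK2):

    rpc.py nvmf_create_transport -t VFIOUSER            # this interrupt-mode run adds -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1         # 64 MiB backing bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0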
00:14:59.837 07:02:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.837 07:02:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:59.837 07:02:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:00.768 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:01.025 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:01.025 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:01.025 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.025 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:01.026 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:01.283 Malloc1 00:15:01.283 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:01.542 07:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:01.799 07:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:02.056 07:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:02.056 07:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:02.056 07:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:02.315 Malloc2 00:15:02.315 07:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:02.572 07:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:02.829 07:02:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1480568 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1480568 ']' 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1480568 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.086 07:02:32 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1480568 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1480568' 00:15:03.086 killing process with pid 1480568 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1480568 00:15:03.086 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1480568 00:15:03.344 07:02:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:03.344 07:02:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:03.344 00:15:03.344 real 0m52.593s 00:15:03.344 user 3m27.803s 00:15:03.344 sys 0m4.274s 00:15:03.344 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.344 07:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:03.344 ************************************ 00:15:03.344 END TEST nvmf_vfio_user 00:15:03.344 ************************************ 00:15:03.344 07:02:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:03.344 07:02:32 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:03.344 07:02:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:03.344 07:02:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.344 07:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.344 ************************************ 00:15:03.344 START TEST nvmf_vfio_user_nvme_compliance 00:15:03.344 ************************************ 00:15:03.344 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:03.602 * Looking for test storage... 
00:15:03.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1481042 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1481042' 00:15:03.602 Process pid: 1481042 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1481042 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1481042 ']' 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.602 07:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:03.602 [2024-07-13 07:02:32.895140] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:03.602 [2024-07-13 07:02:32.895234] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.602 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.602 [2024-07-13 07:02:32.927237] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:03.602 [2024-07-13 07:02:32.959615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:03.602 [2024-07-13 07:02:33.055148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.602 [2024-07-13 07:02:33.055213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.602 [2024-07-13 07:02:33.055239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.602 [2024-07-13 07:02:33.055253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.602 [2024-07-13 07:02:33.055265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
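
The compliance harness bootstraps its target the same way the other suites in this log do: nvmf_tgt is started in the background (-i 0 selects the shared-memory id, -e 0xFFFF enables every tracepoint group, -m 0x7 pins the app to cores 0-2, matching the three reactors reported below), a cleanup trap is armed on the recorded pid, and the script blocks until the RPC socket answers. A hedged sketch of that pattern; the polling loop is only an illustrative stand-in for the harness's waitforlisten helper, not its real implementation:

    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # killprocess is the autotest harness helper seen in the log (kills and reaps the pid)
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # stand-in for waitforlisten: poll until the target's default RPC socket responds
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
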
00:15:03.602 [2024-07-13 07:02:33.055356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.602 [2024-07-13 07:02:33.055410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.602 [2024-07-13 07:02:33.055413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.860 07:02:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.860 07:02:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:03.860 07:02:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 malloc0 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.792 
07:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:05.049 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.049 00:15:05.049 00:15:05.049 CUnit - A unit testing framework for C - Version 2.1-3 00:15:05.049 http://cunit.sourceforge.net/ 00:15:05.049 00:15:05.049 00:15:05.049 Suite: nvme_compliance 00:15:05.049 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-13 07:02:34.408444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.049 [2024-07-13 07:02:34.409907] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:05.049 [2024-07-13 07:02:34.409933] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:05.049 [2024-07-13 07:02:34.409947] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:05.049 [2024-07-13 07:02:34.411460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.049 passed 00:15:05.049 Test: admin_identify_ctrlr_verify_fused ...[2024-07-13 07:02:34.498086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.049 [2024-07-13 07:02:34.501104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.307 passed 00:15:05.307 Test: admin_identify_ns ...[2024-07-13 07:02:34.588394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.307 [2024-07-13 07:02:34.647898] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:05.307 [2024-07-13 07:02:34.655886] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:05.307 [2024-07-13 07:02:34.676992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.307 passed 00:15:05.307 Test: admin_get_features_mandatory_features ...[2024-07-13 07:02:34.760599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.564 [2024-07-13 07:02:34.763606] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.564 passed 00:15:05.564 Test: admin_get_features_optional_features ...[2024-07-13 07:02:34.845132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.564 [2024-07-13 07:02:34.850166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.564 passed 00:15:05.564 Test: admin_set_features_number_of_queues ...[2024-07-13 07:02:34.931273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.821 [2024-07-13 07:02:35.040000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.821 passed 00:15:05.821 Test: admin_get_log_page_mandatory_logs ...[2024-07-13 07:02:35.120529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.821 [2024-07-13 07:02:35.123554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.821 passed 00:15:05.821 Test: admin_get_log_page_with_lpo ...[2024-07-13 07:02:35.208388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.821 [2024-07-13 07:02:35.275904] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:06.078 [2024-07-13 07:02:35.288952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.078 passed 00:15:06.078 Test: fabric_property_get ...[2024-07-13 07:02:35.371578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.078 [2024-07-13 07:02:35.372839] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:06.078 [2024-07-13 07:02:35.377614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.078 passed 00:15:06.078 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-13 07:02:35.462200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.078 [2024-07-13 07:02:35.463481] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:06.078 [2024-07-13 07:02:35.465230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.078 passed 00:15:06.335 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-13 07:02:35.546781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.335 [2024-07-13 07:02:35.631875] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:06.335 [2024-07-13 07:02:35.647888] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:06.335 [2024-07-13 07:02:35.653006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.335 passed 00:15:06.335 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-13 07:02:35.736821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.335 [2024-07-13 07:02:35.738171] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:06.335 [2024-07-13 07:02:35.739844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.335 passed 00:15:06.593 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-13 07:02:35.824044] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.593 [2024-07-13 07:02:35.899877] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:06.593 [2024-07-13 07:02:35.923874] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:06.593 [2024-07-13 07:02:35.928999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.593 passed 00:15:06.593 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-13 07:02:36.012666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.593 [2024-07-13 07:02:36.013977] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:06.593 [2024-07-13 07:02:36.014026] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:06.593 [2024-07-13 07:02:36.015689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.593 passed 00:15:06.853 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-13 07:02:36.097073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.853 [2024-07-13 07:02:36.192905] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:06.853 [2024-07-13 07:02:36.200887] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:06.853 [2024-07-13 07:02:36.208893] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:06.853 [2024-07-13 07:02:36.216892] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:06.853 [2024-07-13 07:02:36.246005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.853 passed 00:15:07.110 Test: admin_create_io_sq_verify_pc ...[2024-07-13 07:02:36.329673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:07.110 [2024-07-13 07:02:36.345888] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:07.110 [2024-07-13 07:02:36.363963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:07.110 passed 00:15:07.110 Test: admin_create_io_qp_max_qps ...[2024-07-13 07:02:36.447519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.479 [2024-07-13 07:02:37.561882] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:08.736 [2024-07-13 07:02:37.942211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.736 passed 00:15:08.736 Test: admin_create_io_sq_shared_cq ...[2024-07-13 07:02:38.023439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.736 [2024-07-13 07:02:38.155873] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:08.993 [2024-07-13 07:02:38.192985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.993 passed 00:15:08.993 00:15:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:15:08.993 suites 1 1 n/a 0 0 00:15:08.993 tests 18 18 18 0 0 00:15:08.993 asserts 360 360 360 0 n/a 00:15:08.993 00:15:08.993 Elapsed time = 1.568 seconds 00:15:08.993 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1481042 00:15:08.993 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1481042 ']' 00:15:08.993 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1481042 00:15:08.993 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:08.993 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.993 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481042 00:15:08.993 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:08.994 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:08.994 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481042' 00:15:08.994 killing process with pid 1481042 00:15:08.994 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1481042 00:15:08.994 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1481042 00:15:09.252 07:02:38 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:09.252 00:15:09.252 real 0m5.763s 00:15:09.252 user 0m16.161s 00:15:09.252 sys 0m0.555s 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 ************************************ 00:15:09.252 END TEST nvmf_vfio_user_nvme_compliance 00:15:09.252 ************************************ 00:15:09.252 07:02:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:09.252 07:02:38 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:09.252 07:02:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:09.252 07:02:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.252 07:02:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 ************************************ 00:15:09.252 START TEST nvmf_vfio_user_fuzz 00:15:09.252 ************************************ 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:09.252 * Looking for test storage... 00:15:09.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 07:02:38 [paths/export.sh@2-6 PATH export trace, identical to the one logged for the nvmf_vfio_user_nvme_compliance suite above; elided] 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 07:02:38
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1481762 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1481762' 00:15:09.252 Process pid: 1481762 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1481762 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1481762 ']' 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
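
Once the target is up, the fuzz stage that follows provisions a single vfio-user controller (malloc0 behind nqn.2021-09.io.spdk:cnode0) and then points the nvme_fuzz app at it for 30 seconds. For orientation, the invocation as it appears further down, with best-effort flag readings; -N and -a are harness-chosen modes left uninterpreted here:

    # -m 0x2: core mask (run the fuzzer on core 1)
    # -t 30: fuzz for 30 seconds
    # -S 123456: fixed RNG seed, matching the random_seed values echoed in the result dump
    # -F: transport ID of the target controller
    "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
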
00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.252 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:09.818 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.818 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:09.818 07:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.747 07:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:10.747 malloc0 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:10.747 07:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:42.802 Fuzzing completed. 
Shutting down the fuzz application 00:15:42.802 00:15:42.802 Dumping successful admin opcodes: 00:15:42.802 8, 9, 10, 24, 00:15:42.802 Dumping successful io opcodes: 00:15:42.802 0, 00:15:42.802 NS: 0x200003a1ef00 I/O qp, Total commands completed: 575920, total successful commands: 2216, random_seed: 256667200 00:15:42.802 NS: 0x200003a1ef00 admin qp, Total commands completed: 73328, total successful commands: 577, random_seed: 473691200 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1481762 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1481762 ']' 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1481762 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481762 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481762' 00:15:42.802 killing process with pid 1481762 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1481762 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1481762 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:42.802 00:15:42.802 real 0m32.215s 00:15:42.802 user 0m31.332s 00:15:42.802 sys 0m28.682s 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:42.802 07:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.802 ************************************ 00:15:42.802 END TEST nvmf_vfio_user_fuzz 00:15:42.802 ************************************ 00:15:42.802 07:03:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:42.802 07:03:10 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:42.802 07:03:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:42.802 07:03:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.802 07:03:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:42.802 ************************************ 00:15:42.802 
START TEST nvmf_host_management ************************************ 07:03:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp * Looking for test storage... * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 07:03:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 07:03:10 [nvmf/common.sh variable setup, scripts/common.sh sourcing, paths/export.sh PATH export trace and NVMF_APP argument building, identical to the traces logged for the nvmf_vfio_user_nvme_compliance and nvmf_vfio_user_fuzz suites above; elided] 07:03:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 07:03:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 07:03:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 07:03:10
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:42.803 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.803 07:03:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.803 07:03:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.803 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:42.803 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:42.803 07:03:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:42.803 07:03:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.736 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:43.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:43.737 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:43.737 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:43.737 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.737 07:03:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:15:43.737 00:15:43.737 --- 10.0.0.2 ping statistics --- 00:15:43.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.737 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:15:43.737 00:15:43.737 --- 10.0.0.1 ping statistics --- 00:15:43.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.737 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1487817 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1487817 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1487817 ']' 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:43.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.737 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:43.737 [2024-07-13 07:03:13.110781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:43.737 [2024-07-13 07:03:13.110889] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.737 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.737 [2024-07-13 07:03:13.150104] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:43.737 [2024-07-13 07:03:13.180355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.995 [2024-07-13 07:03:13.276355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.995 [2024-07-13 07:03:13.276433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.995 [2024-07-13 07:03:13.276447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.995 [2024-07-13 07:03:13.276458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.995 [2024-07-13 07:03:13.276468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.995 [2024-07-13 07:03:13.276551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.995 [2024-07-13 07:03:13.276617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.995 [2024-07-13 07:03:13.276680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:43.995 [2024-07-13 07:03:13.276682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:43.995 [2024-07-13 07:03:13.420755] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:43.995 07:03:13 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.995 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:44.253 Malloc0 00:15:44.253 [2024-07-13 07:03:13.481300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1487961 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1487961 /var/tmp/bdevperf.sock 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1487961 ']' 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
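For orientation, the target-side bring-up traced above condenses to the sketch below. The interface names (cvl_0_0/cvl_0_1), addresses, and RPC arguments are copied from this run; the shortened relative paths, the backgrounding of nvmf_tgt, and the omission of waitforlisten/rpcs.txt batching are presentational assumptions.

# move the target port into its own netns so initiator and target traverse a real TCP path (nvmf_tcp_init)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP traffic on port 4420

# launch the target inside the namespace, then create the transport over its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# host_management.sh then batches the rest through rpcs.txt: the Malloc0 namespace,
# the listener on 10.0.0.2:4420, and host nqn.2016-06.io.spdk:host0 on cnode0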
00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:44.253 { 00:15:44.253 "params": { 00:15:44.253 "name": "Nvme$subsystem", 00:15:44.253 "trtype": "$TEST_TRANSPORT", 00:15:44.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.253 "adrfam": "ipv4", 00:15:44.253 "trsvcid": "$NVMF_PORT", 00:15:44.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.253 "hdgst": ${hdgst:-false}, 00:15:44.253 "ddgst": ${ddgst:-false} 00:15:44.253 }, 00:15:44.253 "method": "bdev_nvme_attach_controller" 00:15:44.253 } 00:15:44.253 EOF 00:15:44.253 )") 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:44.253 07:03:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:44.253 "params": { 00:15:44.253 "name": "Nvme0", 00:15:44.253 "trtype": "tcp", 00:15:44.253 "traddr": "10.0.0.2", 00:15:44.253 "adrfam": "ipv4", 00:15:44.253 "trsvcid": "4420", 00:15:44.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:44.253 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:44.253 "hdgst": false, 00:15:44.253 "ddgst": false 00:15:44.253 }, 00:15:44.253 "method": "bdev_nvme_attach_controller" 00:15:44.253 }' 00:15:44.253 [2024-07-13 07:03:13.557584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:44.253 [2024-07-13 07:03:13.557674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487961 ] 00:15:44.253 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.253 [2024-07-13 07:03:13.589933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:44.253 [2024-07-13 07:03:13.618825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.253 [2024-07-13 07:03:13.706410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.510 Running I/O for 10 seconds... 
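bdevperf is pointed at --json /dev/fd/63, i.e. a process substitution carrying the config that gen_nvmf_target_json prints. Only the per-controller params fragment is visible in the trace, so the outer subsystems/config wrapper below is an assumption about the final shape, and /tmp/nvme0.json stands in for the anonymous pipe.

# assumed wrapper; the params object is verbatim from the trace above
cat <<'JSON' > /tmp/nvme0.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# same flags as the traced invocation: 64-deep 64 KiB verify workload for 10 s
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10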
00:15:44.510 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.510 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.767 07:03:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:44.767 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.767 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:15:44.768 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:44.768 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.026 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:45.026 [2024-07-13 07:03:14.320876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.320945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.320975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.321969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.321984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:45.026 [2024-07-13 07:03:14.322255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 
07:03:14.322578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.026 [2024-07-13 07:03:14.322609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.026 [2024-07-13 07:03:14.322625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.322975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.322990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.323007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.323022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.323039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.323054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.323071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.027 [2024-07-13 07:03:14.323086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.027 [2024-07-13 07:03:14.323102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccfe10 is same with the state(5) to be set 00:15:45.027 [2024-07-13 07:03:14.323197] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ccfe10 was disconnected and freed. reset controller. 
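The two read_io_count samples earlier (67, then 515 after a 0.25 s back-off) come from the waitforio helper, which polls bdev statistics on the bdevperf RPC socket until at least 100 reads have completed before the host is yanked. Condensed, with the rpc.py path shortened, the loop amounts to:

i=10
while (( i != 0 )); do
  reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
  [ "$reads" -ge 100 ] && break   # enough verified I/O in flight; safe to trigger the failure
  sleep 0.25                      # back off and re-poll, up to 10 attempts
  (( i-- ))
done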
00:15:45.027 [2024-07-13 07:03:14.324424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:45.027 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.027 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:45.027 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.027 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:45.027 task offset: 81664 on job bdev=Nvme0n1 fails 00:15:45.027 00:15:45.027 Latency(us) 00:15:45.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.027 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:45.027 Job: Nvme0n1 ended in about 0.40 seconds with error 00:15:45.027 Verification LBA range: start 0x0 length 0x400 00:15:45.027 Nvme0n1 : 0.40 1452.42 90.78 161.38 0.00 38529.97 2864.17 34952.53 00:15:45.027 =================================================================================================================== 00:15:45.027 Total : 1452.42 90.78 161.38 0.00 38529.97 2864.17 34952.53 00:15:45.027 [2024-07-13 07:03:14.326354] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:45.027 [2024-07-13 07:03:14.326385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18beb50 (9): Bad file descriptor 00:15:45.027 07:03:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.027 07:03:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:45.027 [2024-07-13 07:03:14.334849] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
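What the abort storm above demonstrates: pulling host0 from cnode0's allow list while bdevperf is mid-verify force-disconnects the qpair (the ABORTED - SQ DELETION completions and the "disconnected and freed. reset controller" notice), and re-adding it lets the automatic controller reset complete ("Resetting controller successful"). The RPC pair, issued via rpc_cmd in the script and shown here as direct rpc.py calls against the target:

./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # in-flight verify I/O aborts, qpair freed
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # pending controller reset now reconnects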
00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1487961 00:15:45.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1487961) - No such process 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:45.957 { 00:15:45.957 "params": { 00:15:45.957 "name": "Nvme$subsystem", 00:15:45.957 "trtype": "$TEST_TRANSPORT", 00:15:45.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:45.957 "adrfam": "ipv4", 00:15:45.957 "trsvcid": "$NVMF_PORT", 00:15:45.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:45.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:45.957 "hdgst": ${hdgst:-false}, 00:15:45.957 "ddgst": ${ddgst:-false} 00:15:45.957 }, 00:15:45.957 "method": "bdev_nvme_attach_controller" 00:15:45.957 } 00:15:45.957 EOF 00:15:45.957 )") 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:45.957 07:03:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:45.957 "params": { 00:15:45.957 "name": "Nvme0", 00:15:45.957 "trtype": "tcp", 00:15:45.957 "traddr": "10.0.0.2", 00:15:45.957 "adrfam": "ipv4", 00:15:45.957 "trsvcid": "4420", 00:15:45.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:45.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:45.957 "hdgst": false, 00:15:45.957 "ddgst": false 00:15:45.957 }, 00:15:45.958 "method": "bdev_nvme_attach_controller" 00:15:45.958 }' 00:15:45.958 [2024-07-13 07:03:15.376688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:45.958 [2024-07-13 07:03:15.376781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488145 ] 00:15:45.958 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.958 [2024-07-13 07:03:15.408798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:46.215 [2024-07-13 07:03:15.438509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.215 [2024-07-13 07:03:15.523413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.471 Running I/O for 1 seconds... 
00:15:47.435 00:15:47.435 Latency(us) 00:15:47.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.435 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:47.435 Verification LBA range: start 0x0 length 0x400 00:15:47.435 Nvme0n1 : 1.03 1559.61 97.48 0.00 0.00 40393.28 8058.50 33981.63 00:15:47.435 =================================================================================================================== 00:15:47.435 Total : 1559.61 97.48 0.00 0.00 40393.28 8058.50 33981.63 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.693 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.693 rmmod nvme_tcp 00:15:47.693 rmmod nvme_fabrics 00:15:47.693 rmmod nvme_keyring 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1487817 ']' 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1487817 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1487817 ']' 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1487817 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1487817 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1487817' 00:15:47.951 killing process with pid 1487817 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1487817 00:15:47.951 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1487817 00:15:48.210 [2024-07-13 07:03:17.422015] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.210 07:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.107 07:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.107 07:03:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:50.107 00:15:50.107 real 0m8.647s 00:15:50.107 user 0m19.492s 00:15:50.107 sys 0m2.603s 00:15:50.107 07:03:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.107 07:03:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:50.107 ************************************ 00:15:50.107 END TEST nvmf_host_management 00:15:50.107 ************************************ 00:15:50.107 07:03:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:50.107 07:03:19 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:50.107 07:03:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.107 07:03:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.107 07:03:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.107 ************************************ 00:15:50.107 START TEST nvmf_lvol 00:15:50.107 ************************************ 00:15:50.107 07:03:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:50.365 * Looking for test storage... 
00:15:50.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.365 07:03:19 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.365 07:03:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:52.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:52.264 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:52.264 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:52.264 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.264 
07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.264 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:15:52.264 00:15:52.264 --- 10.0.0.2 ping statistics --- 00:15:52.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.264 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:15:52.265 00:15:52.265 --- 10.0.0.1 ping statistics --- 00:15:52.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.265 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1490341 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1490341 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1490341 ']' 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.265 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:52.522 [2024-07-13 07:03:21.724631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:52.522 [2024-07-13 07:03:21.724730] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.522 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.522 [2024-07-13 07:03:21.762872] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:52.522 [2024-07-13 07:03:21.794783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.522 [2024-07-13 07:03:21.884183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:52.522 [2024-07-13 07:03:21.884249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.522 [2024-07-13 07:03:21.884266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.522 [2024-07-13 07:03:21.884279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.522 [2024-07-13 07:03:21.884291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.522 [2024-07-13 07:03:21.884375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.522 [2024-07-13 07:03:21.884453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.522 [2024-07-13 07:03:21.884456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.778 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.778 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:52.778 07:03:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.778 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.778 07:03:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:52.778 07:03:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.778 07:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.035 [2024-07-13 07:03:22.249603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.035 07:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.291 07:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:53.291 07:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.548 07:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:53.548 07:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:53.846 07:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:54.102 07:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=19baa40d-39a6-45aa-9c52-8f61e5df4392 00:15:54.102 07:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 19baa40d-39a6-45aa-9c52-8f61e5df4392 lvol 20 00:15:54.357 07:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dae33d0a-7f77-436b-9094-2642cc3a87c0 00:15:54.357 07:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:54.614 07:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dae33d0a-7f77-436b-9094-2642cc3a87c0 00:15:54.870 07:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:54.870 [2024-07-13 07:03:24.316577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.127 07:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:55.127 07:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1490645 00:15:55.127 07:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:55.127 07:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:55.383 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.316 07:03:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dae33d0a-7f77-436b-9094-2642cc3a87c0 MY_SNAPSHOT 00:15:56.573 07:03:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=efcb828a-e47a-41cb-bbcb-729361014d01 00:15:56.573 07:03:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dae33d0a-7f77-436b-9094-2642cc3a87c0 30 00:15:56.831 07:03:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone efcb828a-e47a-41cb-bbcb-729361014d01 MY_CLONE 00:15:57.089 07:03:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1bf5b019-a163-44f1-8684-6283652570d2 00:15:57.089 07:03:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1bf5b019-a163-44f1-8684-6283652570d2 00:15:57.653 07:03:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1490645 00:16:05.804 Initializing NVMe Controllers 00:16:05.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:05.804 Controller IO queue size 128, less than required. 00:16:05.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:05.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:05.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:05.804 Initialization complete. Launching workers. 
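The nvmf_lvol flow traced above stacks a raid0 bdev over two 64 MiB malloc bdevs, puts an lvstore on the raid, and exports a 20 MiB lvol as namespace 1 of nqn.2016-06.io.spdk:cnode0. While spdk_nvme_perf writes to it over NVMe/TCP, the test drives the snapshot/resize/clone/inflate sequence. A condensed sketch of that sequence, assuming $rpc points at scripts/rpc.py and $lvol/$snap/$clone hold the UUIDs each call returns (the trace shows the concrete values):

  $rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT   # freeze the live lvol's current data
  $rpc bdev_lvol_resize "$lvol" 30              # grow the lvol from 20 MiB to 30 MiB
  $rpc bdev_lvol_clone "$snap" MY_CLONE         # thin clone backed by the snapshot
  $rpc bdev_lvol_inflate "$clone"               # copy clusters so the clone no longer depends on the snapshot

Running these during the 10-second perf job is the point of the test: lvol management operations must stay safe while the bdev is under active writes.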
00:16:05.804 ======================================================== 00:16:05.804 Latency(us) 00:16:05.804 Device Information : IOPS MiB/s Average min max 00:16:05.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10674.70 41.70 11998.52 1191.96 66131.73 00:16:05.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10623.30 41.50 12051.13 1925.61 59631.95 00:16:05.804 ======================================================== 00:16:05.804 Total : 21298.00 83.20 12024.76 1191.96 66131.73 00:16:05.804 00:16:05.804 07:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:06.063 07:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dae33d0a-7f77-436b-9094-2642cc3a87c0 00:16:06.321 07:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19baa40d-39a6-45aa-9c52-8f61e5df4392 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:06.579 rmmod nvme_tcp 00:16:06.579 rmmod nvme_fabrics 00:16:06.579 rmmod nvme_keyring 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1490341 ']' 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1490341 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1490341 ']' 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1490341 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1490341 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1490341' 00:16:06.579 killing process with pid 1490341 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1490341 00:16:06.579 07:03:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1490341 00:16:06.837 07:03:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.837 
07:03:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.837 07:03:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.837 07:03:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.837 07:03:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.837 07:03:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.837 07:03:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.837 07:03:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.367 07:03:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:09.367 00:16:09.368 real 0m18.697s 00:16:09.368 user 1m3.852s 00:16:09.368 sys 0m5.638s 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:09.368 ************************************ 00:16:09.368 END TEST nvmf_lvol 00:16:09.368 ************************************ 00:16:09.368 07:03:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:09.368 07:03:38 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:09.368 07:03:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:09.368 07:03:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.368 07:03:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.368 ************************************ 00:16:09.368 START TEST nvmf_lvs_grow 00:16:09.368 ************************************ 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:09.368 * Looking for test storage... 
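nvmftestfini, traced above, unwinds the fixture: the initiator-side NVMe modules are unloaded, the nvmf_tgt process is killed, and the test addresses are flushed. A minimal sketch of the equivalent steps; note that _remove_spdk_ns runs with xtrace disabled here, so the namespace deletion at the end is inferred from the setup rather than visible in the trace:

  modprobe -v -r nvme-tcp            # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" # stop the nvmf_tgt reactor process
  ip -4 addr flush cvl_0_1           # clear the initiator-side address
  ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns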
00:16:09.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:09.368 07:03:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:11.270 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:11.270 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:11.270 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:11.270 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:11.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:16:11.270 00:16:11.270 --- 10.0.0.2 ping statistics --- 00:16:11.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.270 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:16:11.270 00:16:11.270 --- 10.0.0.1 ping statistics --- 00:16:11.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.270 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1493921 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1493921 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1493921 ']' 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.270 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:11.271 [2024-07-13 07:03:40.551762] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:11.271 [2024-07-13 07:03:40.551831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.271 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.271 [2024-07-13 07:03:40.593482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
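Both suites in this section reuse the same loopback fixture: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target, while its peer (cvl_0_1) stays in the root namespace as the initiator, presumably cabled back-to-back so NVMe/TCP traffic crosses the physical link. The setup commands, collected from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

nvmf_tgt is then launched with ip netns exec cvl_0_0_ns_spdk, which is why the target listens on 10.0.0.2:4420 as seen from the initiator.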
00:16:11.271 [2024-07-13 07:03:40.624040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.271 [2024-07-13 07:03:40.713408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.271 [2024-07-13 07:03:40.713473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.271 [2024-07-13 07:03:40.713501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.271 [2024-07-13 07:03:40.713514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.271 [2024-07-13 07:03:40.713526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.271 [2024-07-13 07:03:40.713565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.528 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.528 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:16:11.528 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:11.528 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:11.528 07:03:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:11.528 07:03:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.528 07:03:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.785 [2024-07-13 07:03:41.093736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:11.785 ************************************ 00:16:11.785 START TEST lvs_grow_clean 00:16:11.785 ************************************ 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:11.785 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:12.042 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:12.042 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:12.299 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bc14b74e-0333-447e-a6d2-03856df1674f 00:16:12.299 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:12.299 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:12.557 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:12.557 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:12.557 07:03:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc14b74e-0333-447e-a6d2-03856df1674f lvol 150 00:16:12.815 07:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd 00:16:12.815 07:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:12.815 07:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:13.073 [2024-07-13 07:03:42.452234] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:13.073 [2024-07-13 07:03:42.452330] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:13.073 true 00:16:13.073 07:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:13.073 07:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:13.330 07:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:13.330 07:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:13.902 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd 00:16:13.902 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:14.160 [2024-07-13 07:03:43.555569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.160 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1494350 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1494350 /var/tmp/bdevperf.sock 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1494350 ']' 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:14.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.417 07:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:14.417 [2024-07-13 07:03:43.860128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:14.417 [2024-07-13 07:03:43.860214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494350 ] 00:16:14.675 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.675 [2024-07-13 07:03:43.893444] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
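The lvs_grow_clean case above exercises lvstore growth on a resizable backing device: a 200 MiB file-backed aio bdev hosts an lvstore with 4 MiB clusters (49 data clusters once metadata is deducted), a 150 MiB lvol is carved out of it, and the backing file is then enlarged and rescanned so the lvstore can be grown in place. A condensed sketch, with $aio standing for the aio_bdev file path and $lvs for the lvstore UUID reported in the trace:

  truncate -s 200M "$aio"
  rpc.py bdev_aio_create "$aio" aio_bdev 4096        # 51200 blocks of 4 KiB
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs  # 49 data clusters
  rpc.py bdev_lvol_create -u "$lvs" lvol 150
  truncate -s 400M "$aio"                            # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                    # block count 51200 -> 102400
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"            # data clusters 49 -> 99

The grow_lvstore call is issued a couple of seconds into the bdevperf run below, confirming the cluster map can be extended while the lvol is under write load.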
00:16:14.675 [2024-07-13 07:03:43.923935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.675 [2024-07-13 07:03:44.014706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.675 07:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.675 07:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:14.675 07:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:15.240 Nvme0n1 00:16:15.240 07:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:15.497 [ 00:16:15.497 { 00:16:15.497 "name": "Nvme0n1", 00:16:15.497 "aliases": [ 00:16:15.497 "d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd" 00:16:15.497 ], 00:16:15.497 "product_name": "NVMe disk", 00:16:15.497 "block_size": 4096, 00:16:15.497 "num_blocks": 38912, 00:16:15.497 "uuid": "d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd", 00:16:15.497 "assigned_rate_limits": { 00:16:15.497 "rw_ios_per_sec": 0, 00:16:15.497 "rw_mbytes_per_sec": 0, 00:16:15.498 "r_mbytes_per_sec": 0, 00:16:15.498 "w_mbytes_per_sec": 0 00:16:15.498 }, 00:16:15.498 "claimed": false, 00:16:15.498 "zoned": false, 00:16:15.498 "supported_io_types": { 00:16:15.498 "read": true, 00:16:15.498 "write": true, 00:16:15.498 "unmap": true, 00:16:15.498 "flush": true, 00:16:15.498 "reset": true, 00:16:15.498 "nvme_admin": true, 00:16:15.498 "nvme_io": true, 00:16:15.498 "nvme_io_md": false, 00:16:15.498 "write_zeroes": true, 00:16:15.498 "zcopy": false, 00:16:15.498 "get_zone_info": false, 00:16:15.498 "zone_management": false, 00:16:15.498 "zone_append": false, 00:16:15.498 "compare": true, 00:16:15.498 "compare_and_write": true, 00:16:15.498 "abort": true, 00:16:15.498 "seek_hole": false, 00:16:15.498 "seek_data": false, 00:16:15.498 "copy": true, 00:16:15.498 "nvme_iov_md": false 00:16:15.498 }, 00:16:15.498 "memory_domains": [ 00:16:15.498 { 00:16:15.498 "dma_device_id": "system", 00:16:15.498 "dma_device_type": 1 00:16:15.498 } 00:16:15.498 ], 00:16:15.498 "driver_specific": { 00:16:15.498 "nvme": [ 00:16:15.498 { 00:16:15.498 "trid": { 00:16:15.498 "trtype": "TCP", 00:16:15.498 "adrfam": "IPv4", 00:16:15.498 "traddr": "10.0.0.2", 00:16:15.498 "trsvcid": "4420", 00:16:15.498 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:15.498 }, 00:16:15.498 "ctrlr_data": { 00:16:15.498 "cntlid": 1, 00:16:15.498 "vendor_id": "0x8086", 00:16:15.498 "model_number": "SPDK bdev Controller", 00:16:15.498 "serial_number": "SPDK0", 00:16:15.498 "firmware_revision": "24.09", 00:16:15.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:15.498 "oacs": { 00:16:15.498 "security": 0, 00:16:15.498 "format": 0, 00:16:15.498 "firmware": 0, 00:16:15.498 "ns_manage": 0 00:16:15.498 }, 00:16:15.498 "multi_ctrlr": true, 00:16:15.498 "ana_reporting": false 00:16:15.498 }, 00:16:15.498 "vs": { 00:16:15.498 "nvme_version": "1.3" 00:16:15.498 }, 00:16:15.498 "ns_data": { 00:16:15.498 "id": 1, 00:16:15.498 "can_share": true 00:16:15.498 } 00:16:15.498 } 00:16:15.498 ], 00:16:15.498 "mp_policy": "active_passive" 00:16:15.498 } 00:16:15.498 } 00:16:15.498 ] 00:16:15.498 07:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1494488 00:16:15.498 07:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:15.498 07:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:15.756 Running I/O for 10 seconds... 00:16:16.690 Latency(us) 00:16:16.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:16.690 Nvme0n1 : 1.00 14316.00 55.92 0.00 0.00 0.00 0.00 0.00 00:16:16.690 =================================================================================================================== 00:16:16.690 Total : 14316.00 55.92 0.00 0.00 0.00 0.00 0.00 00:16:16.690 00:16:17.624 07:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:17.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:17.624 Nvme0n1 : 2.00 14627.00 57.14 0.00 0.00 0.00 0.00 0.00 00:16:17.624 =================================================================================================================== 00:16:17.624 Total : 14627.00 57.14 0.00 0.00 0.00 0.00 0.00 00:16:17.624 00:16:17.882 true 00:16:17.882 07:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:17.882 07:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:18.140 07:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:18.140 07:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:18.140 07:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1494488 00:16:18.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:18.709 Nvme0n1 : 3.00 14741.67 57.58 0.00 0.00 0.00 0.00 0.00 00:16:18.709 =================================================================================================================== 00:16:18.709 Total : 14741.67 57.58 0.00 0.00 0.00 0.00 0.00 00:16:18.709 00:16:19.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.644 Nvme0n1 : 4.00 14765.50 57.68 0.00 0.00 0.00 0.00 0.00 00:16:19.644 =================================================================================================================== 00:16:19.644 Total : 14765.50 57.68 0.00 0.00 0.00 0.00 0.00 00:16:19.644 00:16:20.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.578 Nvme0n1 : 5.00 14879.40 58.12 0.00 0.00 0.00 0.00 0.00 00:16:20.578 =================================================================================================================== 00:16:20.578 Total : 14879.40 58.12 0.00 0.00 0.00 0.00 0.00 00:16:20.578 00:16:21.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.953 Nvme0n1 : 6.00 14852.50 58.02 0.00 0.00 0.00 0.00 0.00 00:16:21.953 =================================================================================================================== 
00:16:21.953 Total : 14852.50 58.02 0.00 0.00 0.00 0.00 0.00 00:16:21.953 00:16:22.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.888 Nvme0n1 : 7.00 14867.71 58.08 0.00 0.00 0.00 0.00 0.00 00:16:22.888 =================================================================================================================== 00:16:22.888 Total : 14867.71 58.08 0.00 0.00 0.00 0.00 0.00 00:16:22.888 00:16:23.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.821 Nvme0n1 : 8.00 14929.88 58.32 0.00 0.00 0.00 0.00 0.00 00:16:23.821 =================================================================================================================== 00:16:23.821 Total : 14929.88 58.32 0.00 0.00 0.00 0.00 0.00 00:16:23.821 00:16:24.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.752 Nvme0n1 : 9.00 14944.33 58.38 0.00 0.00 0.00 0.00 0.00 00:16:24.752 =================================================================================================================== 00:16:24.752 Total : 14944.33 58.38 0.00 0.00 0.00 0.00 0.00 00:16:24.752 00:16:25.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.719 Nvme0n1 : 10.00 15001.00 58.60 0.00 0.00 0.00 0.00 0.00 00:16:25.719 =================================================================================================================== 00:16:25.719 Total : 15001.00 58.60 0.00 0.00 0.00 0.00 0.00 00:16:25.719 00:16:25.719 00:16:25.719 Latency(us) 00:16:25.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.719 Nvme0n1 : 10.01 15002.66 58.60 0.00 0.00 8526.68 4903.06 16893.72 00:16:25.719 =================================================================================================================== 00:16:25.719 Total : 15002.66 58.60 0.00 0.00 8526.68 4903.06 16893.72 00:16:25.719 0 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1494350 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1494350 ']' 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1494350 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1494350 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1494350' 00:16:25.719 killing process with pid 1494350 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1494350 00:16:25.719 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.719 00:16:25.719 Latency(us) 00:16:25.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.719 
=================================================================================================================== 00:16:25.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.719 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1494350 00:16:25.977 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:26.233 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:26.490 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:26.490 07:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:26.748 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:26.748 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:26.748 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:27.006 [2024-07-13 07:03:56.369646] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:27.006 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:27.264 request: 00:16:27.264 { 00:16:27.264 "uuid": "bc14b74e-0333-447e-a6d2-03856df1674f", 00:16:27.264 "method": "bdev_lvol_get_lvstores", 00:16:27.264 "req_id": 1 00:16:27.264 } 00:16:27.264 Got JSON-RPC error response 00:16:27.264 response: 00:16:27.264 { 00:16:27.264 "code": -19, 00:16:27.264 "message": "No such device" 00:16:27.264 } 00:16:27.264 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:27.264 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:27.264 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:27.264 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:27.264 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:27.521 aio_bdev 00:16:27.521 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd 00:16:27.521 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd 00:16:27.521 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:27.521 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:27.521 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:27.521 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:27.521 07:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:27.780 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd -t 2000 00:16:28.038 [ 00:16:28.038 { 00:16:28.038 "name": "d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd", 00:16:28.038 "aliases": [ 00:16:28.038 "lvs/lvol" 00:16:28.038 ], 00:16:28.038 "product_name": "Logical Volume", 00:16:28.038 "block_size": 4096, 00:16:28.038 "num_blocks": 38912, 00:16:28.038 "uuid": "d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd", 00:16:28.038 "assigned_rate_limits": { 00:16:28.038 "rw_ios_per_sec": 0, 00:16:28.038 "rw_mbytes_per_sec": 0, 00:16:28.038 "r_mbytes_per_sec": 0, 00:16:28.038 "w_mbytes_per_sec": 0 00:16:28.038 }, 00:16:28.038 "claimed": false, 00:16:28.038 "zoned": false, 00:16:28.038 "supported_io_types": { 00:16:28.038 "read": true, 00:16:28.038 "write": true, 00:16:28.038 "unmap": true, 00:16:28.038 "flush": false, 00:16:28.038 "reset": true, 00:16:28.038 "nvme_admin": false, 00:16:28.038 "nvme_io": false, 00:16:28.038 "nvme_io_md": false, 00:16:28.038 "write_zeroes": true, 00:16:28.038 "zcopy": false, 00:16:28.038 "get_zone_info": false, 00:16:28.038 "zone_management": false, 00:16:28.038 "zone_append": false, 00:16:28.038 "compare": false, 00:16:28.038 "compare_and_write": false, 00:16:28.038 "abort": false, 00:16:28.038 "seek_hole": true, 00:16:28.038 
"seek_data": true, 00:16:28.038 "copy": false, 00:16:28.038 "nvme_iov_md": false 00:16:28.038 }, 00:16:28.038 "driver_specific": { 00:16:28.038 "lvol": { 00:16:28.038 "lvol_store_uuid": "bc14b74e-0333-447e-a6d2-03856df1674f", 00:16:28.038 "base_bdev": "aio_bdev", 00:16:28.038 "thin_provision": false, 00:16:28.038 "num_allocated_clusters": 38, 00:16:28.038 "snapshot": false, 00:16:28.038 "clone": false, 00:16:28.038 "esnap_clone": false 00:16:28.038 } 00:16:28.038 } 00:16:28.038 } 00:16:28.038 ] 00:16:28.038 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:28.038 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:28.038 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:28.296 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:28.296 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:28.296 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:28.554 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:28.554 07:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5fefea3-d8ff-425b-8a72-cfbf0bc0a4dd 00:16:28.812 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc14b74e-0333-447e-a6d2-03856df1674f 00:16:29.070 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:29.328 00:16:29.328 real 0m17.575s 00:16:29.328 user 0m16.246s 00:16:29.328 sys 0m2.318s 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:29.328 ************************************ 00:16:29.328 END TEST lvs_grow_clean 00:16:29.328 ************************************ 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:29.328 ************************************ 00:16:29.328 START TEST lvs_grow_dirty 00:16:29.328 ************************************ 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:29.328 07:03:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:29.894 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:29.894 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:30.152 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=51798037-1731-4629-b2b5-3271963d6c72 00:16:30.152 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:30.152 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:30.411 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:30.411 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:30.411 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 51798037-1731-4629-b2b5-3271963d6c72 lvol 150 00:16:30.669 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 00:16:30.669 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:30.669 07:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:30.669 [2024-07-13 07:04:00.104043] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:30.669 [2024-07-13 07:04:00.104160] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:16:30.669 true 00:16:30.669 07:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:30.669 07:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:31.234 07:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:31.234 07:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:31.234 07:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 00:16:31.492 07:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:31.753 [2024-07-13 07:04:01.167242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.753 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1496512 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1496512 /var/tmp/bdevperf.sock 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1496512 ']' 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.011 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:32.269 [2024-07-13 07:04:01.472755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
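Between volume creation and the bdevperf run above, the target side is assembled from a handful of nvmf RPCs. As a sketch, with the NQN and address taken from the log, $LVOL_UUID standing in for the new volume's UUID, and the TCP transport assumed to have been created earlier in the run:

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420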
00:16:32.269 [2024-07-13 07:04:01.472825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496512 ] 00:16:32.269 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.269 [2024-07-13 07:04:01.504386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:32.269 [2024-07-13 07:04:01.534189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.270 [2024-07-13 07:04:01.625182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.527 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.527 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:32.527 07:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:32.785 Nvme0n1 00:16:32.785 07:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:33.042 [ 00:16:33.042 { 00:16:33.042 "name": "Nvme0n1", 00:16:33.042 "aliases": [ 00:16:33.042 "bdc0b952-3bcb-4dbd-b75c-976a8836d6a7" 00:16:33.042 ], 00:16:33.042 "product_name": "NVMe disk", 00:16:33.042 "block_size": 4096, 00:16:33.042 "num_blocks": 38912, 00:16:33.042 "uuid": "bdc0b952-3bcb-4dbd-b75c-976a8836d6a7", 00:16:33.042 "assigned_rate_limits": { 00:16:33.042 "rw_ios_per_sec": 0, 00:16:33.042 "rw_mbytes_per_sec": 0, 00:16:33.042 "r_mbytes_per_sec": 0, 00:16:33.042 "w_mbytes_per_sec": 0 00:16:33.042 }, 00:16:33.042 "claimed": false, 00:16:33.042 "zoned": false, 00:16:33.042 "supported_io_types": { 00:16:33.042 "read": true, 00:16:33.042 "write": true, 00:16:33.042 "unmap": true, 00:16:33.042 "flush": true, 00:16:33.042 "reset": true, 00:16:33.042 "nvme_admin": true, 00:16:33.042 "nvme_io": true, 00:16:33.042 "nvme_io_md": false, 00:16:33.042 "write_zeroes": true, 00:16:33.042 "zcopy": false, 00:16:33.042 "get_zone_info": false, 00:16:33.042 "zone_management": false, 00:16:33.042 "zone_append": false, 00:16:33.042 "compare": true, 00:16:33.042 "compare_and_write": true, 00:16:33.042 "abort": true, 00:16:33.042 "seek_hole": false, 00:16:33.042 "seek_data": false, 00:16:33.042 "copy": true, 00:16:33.042 "nvme_iov_md": false 00:16:33.042 }, 00:16:33.042 "memory_domains": [ 00:16:33.042 { 00:16:33.042 "dma_device_id": "system", 00:16:33.043 "dma_device_type": 1 00:16:33.043 } 00:16:33.043 ], 00:16:33.043 "driver_specific": { 00:16:33.043 "nvme": [ 00:16:33.043 { 00:16:33.043 "trid": { 00:16:33.043 "trtype": "TCP", 00:16:33.043 "adrfam": "IPv4", 00:16:33.043 "traddr": "10.0.0.2", 00:16:33.043 "trsvcid": "4420", 00:16:33.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:33.043 }, 00:16:33.043 "ctrlr_data": { 00:16:33.043 "cntlid": 1, 00:16:33.043 "vendor_id": "0x8086", 00:16:33.043 "model_number": "SPDK bdev Controller", 00:16:33.043 "serial_number": "SPDK0", 00:16:33.043 "firmware_revision": "24.09", 00:16:33.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.043 "oacs": { 00:16:33.043 "security": 0, 
00:16:33.043 "format": 0, 00:16:33.043 "firmware": 0, 00:16:33.043 "ns_manage": 0 00:16:33.043 }, 00:16:33.043 "multi_ctrlr": true, 00:16:33.043 "ana_reporting": false 00:16:33.043 }, 00:16:33.043 "vs": { 00:16:33.043 "nvme_version": "1.3" 00:16:33.043 }, 00:16:33.043 "ns_data": { 00:16:33.043 "id": 1, 00:16:33.043 "can_share": true 00:16:33.043 } 00:16:33.043 } 00:16:33.043 ], 00:16:33.043 "mp_policy": "active_passive" 00:16:33.043 } 00:16:33.043 } 00:16:33.043 ] 00:16:33.043 07:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1496647 00:16:33.043 07:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:33.043 07:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:33.043 Running I/O for 10 seconds... 00:16:33.973 Latency(us) 00:16:33.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.973 Nvme0n1 : 1.00 14296.00 55.84 0.00 0.00 0.00 0.00 0.00 00:16:33.973 =================================================================================================================== 00:16:33.973 Total : 14296.00 55.84 0.00 0.00 0.00 0.00 0.00 00:16:33.973 00:16:34.905 07:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:35.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.164 Nvme0n1 : 2.00 14421.00 56.33 0.00 0.00 0.00 0.00 0.00 00:16:35.164 =================================================================================================================== 00:16:35.164 Total : 14421.00 56.33 0.00 0.00 0.00 0.00 0.00 00:16:35.164 00:16:35.164 true 00:16:35.422 07:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:35.422 07:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:35.679 07:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:35.679 07:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:35.679 07:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1496647 00:16:36.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.244 Nvme0n1 : 3.00 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:16:36.244 =================================================================================================================== 00:16:36.244 Total : 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:16:36.244 00:16:37.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.175 Nvme0n1 : 4.00 14771.50 57.70 0.00 0.00 0.00 0.00 0.00 00:16:37.175 =================================================================================================================== 00:16:37.175 Total : 14771.50 57.70 0.00 0.00 0.00 0.00 0.00 00:16:37.175 00:16:38.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.105 Nvme0n1 : 5.00 
14931.40 58.33 0.00 0.00 0.00 0.00 0.00 00:16:38.105 =================================================================================================================== 00:16:38.105 Total : 14931.40 58.33 0.00 0.00 0.00 0.00 0.00 00:16:38.105 00:16:39.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.037 Nvme0n1 : 6.00 14983.50 58.53 0.00 0.00 0.00 0.00 0.00 00:16:39.037 =================================================================================================================== 00:16:39.037 Total : 14983.50 58.53 0.00 0.00 0.00 0.00 0.00 00:16:39.037 00:16:40.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.408 Nvme0n1 : 7.00 15049.14 58.79 0.00 0.00 0.00 0.00 0.00 00:16:40.408 =================================================================================================================== 00:16:40.408 Total : 15049.14 58.79 0.00 0.00 0.00 0.00 0.00 00:16:40.408 00:16:41.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.342 Nvme0n1 : 8.00 15042.38 58.76 0.00 0.00 0.00 0.00 0.00 00:16:41.342 =================================================================================================================== 00:16:41.342 Total : 15042.38 58.76 0.00 0.00 0.00 0.00 0.00 00:16:41.342 00:16:42.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.274 Nvme0n1 : 9.00 15043.89 58.77 0.00 0.00 0.00 0.00 0.00 00:16:42.274 =================================================================================================================== 00:16:42.274 Total : 15043.89 58.77 0.00 0.00 0.00 0.00 0.00 00:16:42.274 00:16:43.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.208 Nvme0n1 : 10.00 15115.40 59.04 0.00 0.00 0.00 0.00 0.00 00:16:43.208 =================================================================================================================== 00:16:43.208 Total : 15115.40 59.04 0.00 0.00 0.00 0.00 0.00 00:16:43.208 00:16:43.208 00:16:43.208 Latency(us) 00:16:43.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.208 Nvme0n1 : 10.01 15113.40 59.04 0.00 0.00 8463.89 4903.06 17185.00 00:16:43.208 =================================================================================================================== 00:16:43.208 Total : 15113.40 59.04 0.00 0.00 8463.89 4903.06 17185.00 00:16:43.208 0 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1496512 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1496512 ']' 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1496512 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1496512 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:43.208 07:04:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1496512' 00:16:43.208 killing process with pid 1496512 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1496512 00:16:43.208 Received shutdown signal, test time was about 10.000000 seconds 00:16:43.208 00:16:43.208 Latency(us) 00:16:43.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.208 =================================================================================================================== 00:16:43.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.208 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1496512 00:16:43.466 07:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:43.724 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:43.981 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:43.981 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1493921 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1493921 00:16:44.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1493921 Killed "${NVMF_APP[@]}" "$@" 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1497954 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1497954 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1497954 ']' 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.240 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:44.240 [2024-07-13 07:04:13.631065] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:44.240 [2024-07-13 07:04:13.631160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.240 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.240 [2024-07-13 07:04:13.669485] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:44.497 [2024-07-13 07:04:13.701495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.497 [2024-07-13 07:04:13.795863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.497 [2024-07-13 07:04:13.795941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.498 [2024-07-13 07:04:13.795957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.498 [2024-07-13 07:04:13.795971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.498 [2024-07-13 07:04:13.795983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
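The nvmf_tgt above runs with -e 0xFFFF and instance id 0, so every tracepoint group is enabled and the trace ring lands in /dev/shm/nvmf_trace.0. Following the app_setup_trace notices, events can be inspected live or offline; a sketch, assuming the spdk_trace tool from the same build and that its -f option reads a saved trace file:

# Live snapshot of the running target's tracepoints (as the notice suggests):
spdk_trace -s nvmf -i 0

# Or keep the raw shm ring for offline decoding later:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
spdk_trace -f /tmp/nvmf_trace.0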
00:16:44.498 [2024-07-13 07:04:13.796021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.498 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.498 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:44.498 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.498 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.498 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:44.498 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.498 07:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:44.758 [2024-07-13 07:04:14.166942] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:44.758 [2024-07-13 07:04:14.167093] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:44.758 [2024-07-13 07:04:14.167150] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.758 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.037 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 -t 2000 00:16:45.296 [ 00:16:45.296 { 00:16:45.296 "name": "bdc0b952-3bcb-4dbd-b75c-976a8836d6a7", 00:16:45.296 "aliases": [ 00:16:45.296 "lvs/lvol" 00:16:45.296 ], 00:16:45.296 "product_name": "Logical Volume", 00:16:45.296 "block_size": 4096, 00:16:45.296 "num_blocks": 38912, 00:16:45.296 "uuid": "bdc0b952-3bcb-4dbd-b75c-976a8836d6a7", 00:16:45.296 "assigned_rate_limits": { 00:16:45.296 "rw_ios_per_sec": 0, 00:16:45.296 "rw_mbytes_per_sec": 0, 00:16:45.296 "r_mbytes_per_sec": 0, 00:16:45.296 "w_mbytes_per_sec": 0 00:16:45.296 }, 00:16:45.296 "claimed": false, 00:16:45.296 "zoned": false, 00:16:45.296 "supported_io_types": { 00:16:45.296 "read": true, 00:16:45.296 "write": true, 00:16:45.296 "unmap": true, 00:16:45.296 "flush": false, 00:16:45.296 "reset": true, 00:16:45.296 "nvme_admin": false, 00:16:45.296 "nvme_io": false, 00:16:45.296 "nvme_io_md": 
false, 00:16:45.296 "write_zeroes": true, 00:16:45.296 "zcopy": false, 00:16:45.296 "get_zone_info": false, 00:16:45.296 "zone_management": false, 00:16:45.296 "zone_append": false, 00:16:45.296 "compare": false, 00:16:45.296 "compare_and_write": false, 00:16:45.296 "abort": false, 00:16:45.296 "seek_hole": true, 00:16:45.296 "seek_data": true, 00:16:45.296 "copy": false, 00:16:45.296 "nvme_iov_md": false 00:16:45.296 }, 00:16:45.296 "driver_specific": { 00:16:45.296 "lvol": { 00:16:45.296 "lvol_store_uuid": "51798037-1731-4629-b2b5-3271963d6c72", 00:16:45.296 "base_bdev": "aio_bdev", 00:16:45.296 "thin_provision": false, 00:16:45.296 "num_allocated_clusters": 38, 00:16:45.296 "snapshot": false, 00:16:45.296 "clone": false, 00:16:45.296 "esnap_clone": false 00:16:45.296 } 00:16:45.296 } 00:16:45.296 } 00:16:45.296 ] 00:16:45.296 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:45.296 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:45.296 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:45.555 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:45.555 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:45.555 07:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:45.813 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:45.813 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:46.071 [2024-07-13 07:04:15.411818] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:46.071 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:46.071 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:46.071 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
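The NOT wrapper threaded through the xtrace above is the harness's negative assertion: once bdev_aio_delete has torn the lvstore down, bdev_lvol_get_lvstores must fail (the RPC returns -19, "No such device"). The pattern reduces to a sketch like this, where not_cmd is an illustrative stand-in for the harness helper:

not_cmd() {
    # Succeed only when the wrapped command fails.
    if "$@"; then
        echo "expected failure, got success: $*" >&2
        return 1
    fi
}

not_cmd rpc.py bdev_lvol_get_lvstores -u "$LVS"   # lvstore is gone, so this must error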
00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:46.072 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:46.329 request: 00:16:46.329 { 00:16:46.329 "uuid": "51798037-1731-4629-b2b5-3271963d6c72", 00:16:46.329 "method": "bdev_lvol_get_lvstores", 00:16:46.329 "req_id": 1 00:16:46.329 } 00:16:46.329 Got JSON-RPC error response 00:16:46.329 response: 00:16:46.329 { 00:16:46.329 "code": -19, 00:16:46.329 "message": "No such device" 00:16:46.329 } 00:16:46.329 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:46.329 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:46.329 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:46.329 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:46.329 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:46.587 aio_bdev 00:16:46.587 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 00:16:46.587 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 00:16:46.587 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:46.587 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:46.587 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:46.587 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:46.587 07:04:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:46.845 07:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 -t 2000 00:16:47.104 [ 00:16:47.104 { 00:16:47.104 "name": "bdc0b952-3bcb-4dbd-b75c-976a8836d6a7", 00:16:47.104 "aliases": [ 00:16:47.104 "lvs/lvol" 00:16:47.104 ], 00:16:47.104 "product_name": "Logical Volume", 00:16:47.104 "block_size": 4096, 00:16:47.104 "num_blocks": 38912, 00:16:47.104 "uuid": "bdc0b952-3bcb-4dbd-b75c-976a8836d6a7", 00:16:47.104 "assigned_rate_limits": { 00:16:47.104 "rw_ios_per_sec": 0, 00:16:47.104 "rw_mbytes_per_sec": 0, 00:16:47.104 "r_mbytes_per_sec": 0, 00:16:47.104 "w_mbytes_per_sec": 0 00:16:47.104 }, 00:16:47.104 "claimed": false, 00:16:47.104 "zoned": false, 00:16:47.104 "supported_io_types": { 
00:16:47.104 "read": true, 00:16:47.104 "write": true, 00:16:47.104 "unmap": true, 00:16:47.104 "flush": false, 00:16:47.104 "reset": true, 00:16:47.104 "nvme_admin": false, 00:16:47.104 "nvme_io": false, 00:16:47.104 "nvme_io_md": false, 00:16:47.104 "write_zeroes": true, 00:16:47.104 "zcopy": false, 00:16:47.104 "get_zone_info": false, 00:16:47.104 "zone_management": false, 00:16:47.104 "zone_append": false, 00:16:47.104 "compare": false, 00:16:47.104 "compare_and_write": false, 00:16:47.104 "abort": false, 00:16:47.104 "seek_hole": true, 00:16:47.104 "seek_data": true, 00:16:47.104 "copy": false, 00:16:47.104 "nvme_iov_md": false 00:16:47.104 }, 00:16:47.104 "driver_specific": { 00:16:47.104 "lvol": { 00:16:47.104 "lvol_store_uuid": "51798037-1731-4629-b2b5-3271963d6c72", 00:16:47.104 "base_bdev": "aio_bdev", 00:16:47.104 "thin_provision": false, 00:16:47.104 "num_allocated_clusters": 38, 00:16:47.104 "snapshot": false, 00:16:47.104 "clone": false, 00:16:47.104 "esnap_clone": false 00:16:47.104 } 00:16:47.104 } 00:16:47.104 } 00:16:47.104 ] 00:16:47.104 07:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:47.104 07:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:47.104 07:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:47.362 07:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:47.362 07:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:47.362 07:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:47.620 07:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:47.620 07:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bdc0b952-3bcb-4dbd-b75c-976a8836d6a7 00:16:47.879 07:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51798037-1731-4629-b2b5-3271963d6c72 00:16:48.137 07:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:48.395 07:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:48.395 00:16:48.395 real 0m19.047s 00:16:48.395 user 0m48.397s 00:16:48.395 sys 0m4.652s 00:16:48.395 07:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:48.395 07:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 ************************************ 00:16:48.395 END TEST lvs_grow_dirty 00:16:48.395 ************************************ 00:16:48.395 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
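At teardown the suite archives any trace shm files matching the app id, as the process_shm trace on the next lines shows (nvmf_trace.0 is packed into the output directory). The standalone equivalent is roughly this sketch, with an illustrative output path:

id=0
out=/tmp/output                                    # illustrative output directory
for f in $(find /dev/shm -name "*.$id" -printf '%f\n'); do
    tar -C /dev/shm/ -cvzf "$out/${f}_shm.tar.gz" "$f"
done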
00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:48.396 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:48.396 nvmf_trace.0 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.654 rmmod nvme_tcp 00:16:48.654 rmmod nvme_fabrics 00:16:48.654 rmmod nvme_keyring 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1497954 ']' 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1497954 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1497954 ']' 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1497954 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1497954 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1497954' 00:16:48.654 killing process with pid 1497954 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1497954 00:16:48.654 07:04:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1497954 00:16:48.912 07:04:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.912 07:04:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.912 07:04:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.912 
07:04:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.912 07:04:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.912 07:04:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.912 07:04:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.912 07:04:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.814 07:04:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.814 00:16:50.814 real 0m41.933s 00:16:50.814 user 1m10.272s 00:16:50.814 sys 0m8.889s 00:16:50.814 07:04:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.814 07:04:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:50.814 ************************************ 00:16:50.814 END TEST nvmf_lvs_grow 00:16:50.814 ************************************ 00:16:50.814 07:04:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.814 07:04:20 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:50.814 07:04:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.814 07:04:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.814 07:04:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.073 ************************************ 00:16:51.073 START TEST nvmf_bdev_io_wait 00:16:51.073 ************************************ 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:51.073 * Looking for test storage... 
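Before the next test begins, it is worth restating what the nvmftestfini teardown above actually does. A minimal shell sketch of the sequence, using the interface, namespace, and PID from this run (_remove_spdk_ns is an SPDK helper whose body is not shown in this log; its effect here is assumed to be deletion of the target namespace):

  sync
  modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_fabrics and nvme_keyring going with it
  modprobe -v -r nvme-fabrics
  kill 1497954 && wait 1497954   # nvmfpid recorded when the target was started
  _remove_spdk_ns                # assumed: tears down the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1       # clear the initiator-side address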
00:16:51.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.073 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.074 07:04:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.975 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:52.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:52.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:52.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:52.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.976 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:53.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:16:53.235 00:16:53.235 --- 10.0.0.2 ping statistics --- 00:16:53.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.235 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:16:53.235 00:16:53.235 --- 10.0.0.1 ping statistics --- 00:16:53.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.235 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1500377 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1500377 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1500377 ']' 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.235 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.235 [2024-07-13 07:04:22.577750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
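The nvmf_tcp_init steps above split one physical NIC between two network stacks: port cvl_0_0 moves into a private namespace to act as the target, while cvl_0_1 stays in the root namespace as the initiator, and the two pings confirm the loop across the physical link. A standalone sketch of the same plumbing (commands as logged; the polling loop is a simplification of SPDK's waitforlisten helper, using rpc_get_methods as a cheap probe of the RPC socket):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done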
00:16:53.235 [2024-07-13 07:04:22.577840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.235 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.235 [2024-07-13 07:04:22.617704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:53.235 [2024-07-13 07:04:22.650474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.493 [2024-07-13 07:04:22.747241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.493 [2024-07-13 07:04:22.747307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.493 [2024-07-13 07:04:22.747333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.493 [2024-07-13 07:04:22.747347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.493 [2024-07-13 07:04:22.747358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.493 [2024-07-13 07:04:22.747447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.493 [2024-07-13 07:04:22.747529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.493 [2024-07-13 07:04:22.747630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.493 [2024-07-13 07:04:22.747632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.493 [2024-07-13 
07:04:22.892941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.493 Malloc0 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.493 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:53.750 [2024-07-13 07:04:22.951223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1500518 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1500520 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.750 { 00:16:53.750 "params": { 00:16:53.750 "name": "Nvme$subsystem", 00:16:53.750 "trtype": "$TEST_TRANSPORT", 00:16:53.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.750 "adrfam": "ipv4", 00:16:53.750 "trsvcid": "$NVMF_PORT", 00:16:53.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.750 "hdgst": ${hdgst:-false}, 00:16:53.750 "ddgst": ${ddgst:-false} 00:16:53.750 }, 00:16:53.750 "method": 
"bdev_nvme_attach_controller" 00:16:53.750 } 00:16:53.750 EOF 00:16:53.750 )") 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1500522 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.750 { 00:16:53.750 "params": { 00:16:53.750 "name": "Nvme$subsystem", 00:16:53.750 "trtype": "$TEST_TRANSPORT", 00:16:53.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.750 "adrfam": "ipv4", 00:16:53.750 "trsvcid": "$NVMF_PORT", 00:16:53.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.750 "hdgst": ${hdgst:-false}, 00:16:53.750 "ddgst": ${ddgst:-false} 00:16:53.750 }, 00:16:53.750 "method": "bdev_nvme_attach_controller" 00:16:53.750 } 00:16:53.750 EOF 00:16:53.750 )") 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1500525 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.750 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.751 { 00:16:53.751 "params": { 00:16:53.751 "name": "Nvme$subsystem", 00:16:53.751 "trtype": "$TEST_TRANSPORT", 00:16:53.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.751 "adrfam": "ipv4", 00:16:53.751 "trsvcid": "$NVMF_PORT", 00:16:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.751 "hdgst": ${hdgst:-false}, 00:16:53.751 "ddgst": ${ddgst:-false} 00:16:53.751 }, 00:16:53.751 "method": "bdev_nvme_attach_controller" 00:16:53.751 } 00:16:53.751 EOF 00:16:53.751 )") 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.751 { 00:16:53.751 "params": { 00:16:53.751 "name": "Nvme$subsystem", 00:16:53.751 "trtype": "$TEST_TRANSPORT", 00:16:53.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.751 "adrfam": "ipv4", 00:16:53.751 "trsvcid": "$NVMF_PORT", 00:16:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.751 "hdgst": ${hdgst:-false}, 00:16:53.751 "ddgst": ${ddgst:-false} 00:16:53.751 }, 00:16:53.751 "method": "bdev_nvme_attach_controller" 00:16:53.751 } 00:16:53.751 EOF 00:16:53.751 )") 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1500518 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.751 "params": { 00:16:53.751 "name": "Nvme1", 00:16:53.751 "trtype": "tcp", 00:16:53.751 "traddr": "10.0.0.2", 00:16:53.751 "adrfam": "ipv4", 00:16:53.751 "trsvcid": "4420", 00:16:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.751 "hdgst": false, 00:16:53.751 "ddgst": false 00:16:53.751 }, 00:16:53.751 "method": "bdev_nvme_attach_controller" 00:16:53.751 }' 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.751 "params": { 00:16:53.751 "name": "Nvme1", 00:16:53.751 "trtype": "tcp", 00:16:53.751 "traddr": "10.0.0.2", 00:16:53.751 "adrfam": "ipv4", 00:16:53.751 "trsvcid": "4420", 00:16:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.751 "hdgst": false, 00:16:53.751 "ddgst": false 00:16:53.751 }, 00:16:53.751 "method": "bdev_nvme_attach_controller" 00:16:53.751 }' 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
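Reassembled from the fragments that gen_nvmf_target_json echoes through the IFS/printf/jq pipeline here, the configuration each bdevperf instance reads over /dev/fd/63 looks like the file below. The outer subsystems wrapper is the shape gen_nvmf_target_json produces and is assumed here, since only the inner object is printed verbatim in the log (newer versions of the helper may also append a bdev_wait_for_examine step):

  cat <<'EOF' > /tmp/bdevperf_nvme.json   # illustrative path, not used by the test itself
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF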
00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.751 "params": { 00:16:53.751 "name": "Nvme1", 00:16:53.751 "trtype": "tcp", 00:16:53.751 "traddr": "10.0.0.2", 00:16:53.751 "adrfam": "ipv4", 00:16:53.751 "trsvcid": "4420", 00:16:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.751 "hdgst": false, 00:16:53.751 "ddgst": false 00:16:53.751 }, 00:16:53.751 "method": "bdev_nvme_attach_controller" 00:16:53.751 }' 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:53.751 07:04:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.751 "params": { 00:16:53.751 "name": "Nvme1", 00:16:53.751 "trtype": "tcp", 00:16:53.751 "traddr": "10.0.0.2", 00:16:53.751 "adrfam": "ipv4", 00:16:53.751 "trsvcid": "4420", 00:16:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.751 "hdgst": false, 00:16:53.751 "ddgst": false 00:16:53.751 }, 00:16:53.751 "method": "bdev_nvme_attach_controller" 00:16:53.751 }' 00:16:53.751 [2024-07-13 07:04:22.997950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:53.751 [2024-07-13 07:04:22.997951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:53.751 [2024-07-13 07:04:22.998043] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:53.751 [2024-07-13 07:04:22.998043] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:53.751 [2024-07-13 07:04:22.998074] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:53.751 [2024-07-13 07:04:22.998078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:53.751 [2024-07-13 07:04:22.998150] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:53.751 [2024-07-13 07:04:22.998149] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:53.751 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.751 [2024-07-13 07:04:23.120460] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:53.751 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.751 [2024-07-13 07:04:23.149853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.751 [2024-07-13 07:04:23.196379] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
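For reference, the target-side provisioning that preceded these four launches (bdev_io_wait.sh lines 18 through 25 above) is equivalent to the following rpc.py sequence against the default /var/tmp/spdk.sock socket (paths relative to the SPDK tree):

  ./scripts/rpc.py bdev_set_options -p 5 -c 1                   # deliberately tiny bdev_io pool, the condition this test exercises
  ./scripts/rpc.py framework_start_init                         # release the --wait-for-rpc pause
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420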
00:16:53.751 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.008 [2024-07-13 07:04:23.219212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:54.008 [2024-07-13 07:04:23.226043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.008 [2024-07-13 07:04:23.293219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:54.008 [2024-07-13 07:04:23.293771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:54.008 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.008 [2024-07-13 07:04:23.323590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.008 [2024-07-13 07:04:23.393010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:54.008 [2024-07-13 07:04:23.398217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:54.008 [2024-07-13 07:04:23.423053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.266 [2024-07-13 07:04:23.498062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:54.266 Running I/O for 1 seconds... 00:16:54.523 Running I/O for 1 seconds... 00:16:54.523 Running I/O for 1 seconds... 00:16:54.523 Running I/O for 1 seconds... 00:16:55.456 00:16:55.456 Latency(us) 00:16:55.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.456 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:55.456 Nvme1n1 : 1.01 10715.32 41.86 0.00 0.00 11897.52 8349.77 19223.89 00:16:55.456 =================================================================================================================== 00:16:55.456 Total : 10715.32 41.86 0.00 0.00 11897.52 8349.77 19223.89 00:16:55.456 00:16:55.456 Latency(us) 00:16:55.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.456 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:55.456 Nvme1n1 : 1.00 197811.75 772.70 0.00 0.00 644.68 270.03 819.20 00:16:55.456 =================================================================================================================== 00:16:55.456 Total : 197811.75 772.70 0.00 0.00 644.68 270.03 819.20 00:16:55.456 00:16:55.456 Latency(us) 00:16:55.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.456 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:55.456 Nvme1n1 : 1.01 8228.37 32.14 0.00 0.00 15493.03 3665.16 24272.59 00:16:55.456 =================================================================================================================== 00:16:55.456 Total : 8228.37 32.14 0.00 0.00 15493.03 3665.16 24272.59 00:16:55.456 00:16:55.456 Latency(us) 00:16:55.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.456 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:55.456 Nvme1n1 : 1.01 8208.16 32.06 0.00 0.00 15543.04 7961.41 30098.01 00:16:55.456 =================================================================================================================== 00:16:55.456 Total : 8208.16 32.06 0.00 0.00 15543.04 7961.41 30098.01 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1500520 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1500522 
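All four jobs above run concurrently, one workload per reactor core, each with its own shm id (hence the spdk1 through spdk4 DPDK file prefixes during startup), and the script reaps them in launch order; schematically (arguments as logged, with gen_nvmf_target_json standing in for the JSON shown earlier):

  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  ./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
  ./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  ./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  sync
  wait $WRITE_PID; wait $READ_PID; wait $FLUSH_PID; wait $UNMAP_PID

The flush job's roughly 198k IOPS against 8-11k for the data-moving workloads is expected here: a flush against a Malloc bdev is effectively a no-op completed inline, while reads, writes, and unmaps pay the full NVMe/TCP round trip.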
00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1500525 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.714 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.714 rmmod nvme_tcp 00:16:55.973 rmmod nvme_fabrics 00:16:55.973 rmmod nvme_keyring 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1500377 ']' 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1500377 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1500377 ']' 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1500377 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1500377 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1500377' 00:16:55.973 killing process with pid 1500377 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1500377 00:16:55.973 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1500377 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.233 07:04:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.134 07:04:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:58.134 00:16:58.134 real 0m7.225s 00:16:58.134 user 0m15.959s 00:16:58.134 sys 0m3.588s 00:16:58.134 07:04:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:58.134 07:04:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:58.134 ************************************ 00:16:58.134 END TEST nvmf_bdev_io_wait 00:16:58.134 ************************************ 00:16:58.134 07:04:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:58.134 07:04:27 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:58.134 07:04:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:58.134 07:04:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.134 07:04:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:58.134 ************************************ 00:16:58.135 START TEST nvmf_queue_depth 00:16:58.135 ************************************ 00:16:58.135 07:04:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:58.401 * Looking for test storage... 00:16:58.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:58.401 07:04:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.300 07:04:29 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:00.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:00.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:00.300 
07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:00.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:00.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.300 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:00.301 
07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:00.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:17:00.301 00:17:00.301 --- 10.0.0.2 ping statistics --- 00:17:00.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.301 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:17:00.301 00:17:00.301 --- 10.0.0.1 ping statistics --- 00:17:00.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.301 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1502741 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@482 -- # waitforlisten 1502741 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1502741 ']' 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.301 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.301 [2024-07-13 07:04:29.712941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:00.301 [2024-07-13 07:04:29.713026] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.301 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.301 [2024-07-13 07:04:29.751508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:00.560 [2024-07-13 07:04:29.778081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.560 [2024-07-13 07:04:29.865620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.560 [2024-07-13 07:04:29.865684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.561 [2024-07-13 07:04:29.865697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.561 [2024-07-13 07:04:29.865709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.561 [2024-07-13 07:04:29.865718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
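The nvmf_tcp_init trace above reduces to the sketch below. Interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses are the values from this run; the wrapper function name is illustrative and not part of nvmf/common.sh.

    # Move the target port into its own namespace so target (10.0.0.2) and
    # initiator (10.0.0.1) exercise a real link between the two NIC ports.
    setup_tcp_test_topology() {    # hypothetical name, for illustration only
        ip -4 addr flush cvl_0_0
        ip -4 addr flush cvl_0_1
        ip netns add cvl_0_0_ns_spdk
        ip link set cvl_0_0 netns cvl_0_0_ns_spdk
        ip addr add 10.0.0.1/24 dev cvl_0_1
        ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
        ip link set cvl_0_1 up
        ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
        ip netns exec cvl_0_0_ns_spdk ip link set lo up
        iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
        ping -c 1 10.0.0.2                                # initiator -> target
        ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    }

The nvmf_tgt started above therefore runs under "ip netns exec cvl_0_0_ns_spdk", while the initiator-side tooling stays in the root namespace.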
00:17:00.561 [2024-07-13 07:04:29.865746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.561 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.561 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:00.561 07:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:00.561 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:00.561 07:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.561 07:04:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.561 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:00.561 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.561 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.561 [2024-07-13 07:04:30.010876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.561 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.561 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:00.561 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.820 Malloc0 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.820 [2024-07-13 07:04:30.068490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1502763 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:00.820 07:04:30 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1502763 /var/tmp/bdevperf.sock 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1502763 ']' 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.820 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:00.820 [2024-07-13 07:04:30.115333] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:00.820 [2024-07-13 07:04:30.115398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502763 ] 00:17:00.820 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.820 [2024-07-13 07:04:30.147371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:00.820 [2024-07-13 07:04:30.177218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.820 [2024-07-13 07:04:30.268170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.078 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.078 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:01.078 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:01.078 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.078 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:01.078 NVMe0n1 00:17:01.078 07:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.078 07:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:01.336 Running I/O for 10 seconds... 
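With the target up, target/queue_depth.sh drives the sequence below, condensed from the rpc_cmd trace and bdevperf invocation above; paths are relative to the spdk checkout. The parameters are those of this run: a 64 MiB malloc bdev with 512-byte blocks, queue depth 1024, 4 KiB verify I/O for 10 seconds.

    # Target side (nvmf_tgt inside cvl_0_0_ns_spdk, core mask 0x2):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side (root namespace); -z holds bdevperf idle until perform_tests is sent:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The latency table that follows reports roughly 8348 IOPS (about 32.6 MiB/s) at depth 1024 over the 10-second run.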
00:17:11.303 00:17:11.303 Latency(us) 00:17:11.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.303 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:11.303 Verification LBA range: start 0x0 length 0x4000 00:17:11.303 NVMe0n1 : 10.08 8348.01 32.61 0.00 0.00 122031.30 18155.90 80002.47 00:17:11.303 =================================================================================================================== 00:17:11.303 Total : 8348.01 32.61 0.00 0.00 122031.30 18155.90 80002.47 00:17:11.303 0 00:17:11.303 07:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1502763 00:17:11.303 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1502763 ']' 00:17:11.303 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1502763 00:17:11.303 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:11.303 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.303 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1502763 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1502763' 00:17:11.560 killing process with pid 1502763 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1502763 00:17:11.560 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.560 00:17:11.560 Latency(us) 00:17:11.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.560 =================================================================================================================== 00:17:11.560 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1502763 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.560 07:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.560 rmmod nvme_tcp 00:17:11.560 rmmod nvme_fabrics 00:17:11.560 rmmod nvme_keyring 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1502741 ']' 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1502741 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 
1502741 ']' 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1502741 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1502741 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1502741' 00:17:11.818 killing process with pid 1502741 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1502741 00:17:11.818 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1502741 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.076 07:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.979 07:04:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:13.979 00:17:13.979 real 0m15.809s 00:17:13.979 user 0m22.350s 00:17:13.979 sys 0m2.993s 00:17:13.979 07:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.979 07:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:13.979 ************************************ 00:17:13.979 END TEST nvmf_queue_depth 00:17:13.979 ************************************ 00:17:13.979 07:04:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:13.979 07:04:43 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:13.979 07:04:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.980 07:04:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.980 07:04:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.980 ************************************ 00:17:13.980 START TEST nvmf_target_multipath 00:17:13.980 ************************************ 00:17:13.980 07:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:14.237 * Looking for test storage... 
00:17:14.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.237 07:04:43 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.238 
07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.238 07:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.139 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.140 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.140 
07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.140 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.140 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.140 
07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:17:16.140 00:17:16.140 --- 10.0.0.2 ping statistics --- 00:17:16.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.140 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:17:16.140 00:17:16.140 --- 10.0.0.1 ping statistics --- 00:17:16.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.140 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:16.140 only one NIC for nvmf test 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.140 rmmod nvme_tcp 00:17:16.140 rmmod nvme_fabrics 00:17:16.140 rmmod nvme_keyring 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:16.140 07:04:45 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.140 07:04:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.670 00:17:18.670 real 0m4.122s 00:17:18.670 user 0m0.697s 00:17:18.670 sys 0m1.405s 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.670 07:04:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:18.670 
************************************ 00:17:18.670 END TEST nvmf_target_multipath 00:17:18.670 ************************************ 00:17:18.670 07:04:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:18.670 07:04:47 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:18.670 07:04:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.670 07:04:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.670 07:04:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.670 ************************************ 00:17:18.670 START TEST nvmf_zcopy 00:17:18.670 ************************************ 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:18.670 * Looking for test storage... 00:17:18.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.670 07:04:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
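Every test in this log brackets its body with the same init/fini pair; the sketches below are condensed reconstructions from the traces in this run (queue_depth and multipath above, zcopy here), not the verbatim nvmf/common.sh source.

    nvmftestinit() {
        trap nvmftestfini SIGINT SIGTERM EXIT
        remove_spdk_ns                    # drop namespaces left by the previous test
        gather_supported_nvmf_pci_devs    # NET_TYPE=phy: probe for real NICs (is_hw=yes)
        nvmf_tcp_init                     # rebuild the cvl_0_0_ns_spdk topology
        NVMF_TRANSPORT_OPTS='-t tcp -o'
        modprobe nvme-tcp
    }
    nvmftestfini() {
        sync
        modprobe -v -r nvme-tcp           # unloads nvme_tcp, nvme_fabrics, nvme_keyring
        modprobe -v -r nvme-fabrics
        [ -n "$nvmfpid" ] && killprocess "$nvmfpid"
        remove_spdk_ns
        ip -4 addr flush cvl_0_1
    }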
00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.671 07:04:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
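A gloss on the array setup just traced: pci_bus_cache maps "vendor:device" strings to PCI addresses, and the e810/x722/mlx arrays bucket whichever supported NICs the host actually has; the e810 bucket then becomes pci_devs for a TCP run. Condensed, with device IDs copied from the trace (the family names in the comments are best-effort glosses, not from the log):

    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel E810 100G variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # Intel E810 25G - the two ports found below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})    # Intel X722
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # Mellanox ConnectX-5, one of several IDs
    pci_devs+=("${e810[@]}")                     # tcp (non-rdma) run: e810 preferred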
00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:20.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.574 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:20.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:20.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:20.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:20.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:17:20.575 00:17:20.575 --- 10.0.0.2 ping statistics --- 00:17:20.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.575 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:17:20.575 00:17:20.575 --- 10.0.0.1 ping statistics --- 00:17:20.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.575 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1507811 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1507811 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1507811 ']' 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.575 07:04:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.575 [2024-07-13 07:04:49.835396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:20.575 [2024-07-13 07:04:49.835494] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.575 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.575 [2024-07-13 07:04:49.876731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.575 [2024-07-13 07:04:49.907660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.575 [2024-07-13 07:04:50.000183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:20.575 [2024-07-13 07:04:50.000261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.575 [2024-07-13 07:04:50.000278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.575 [2024-07-13 07:04:50.000292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.575 [2024-07-13 07:04:50.000303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.575 [2024-07-13 07:04:50.000340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.832 [2024-07-13 07:04:50.146313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.832 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.833 [2024-07-13 07:04:50.162508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.833 malloc0 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:20.833 { 00:17:20.833 "params": { 00:17:20.833 "name": "Nvme$subsystem", 00:17:20.833 "trtype": "$TEST_TRANSPORT", 00:17:20.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.833 "adrfam": "ipv4", 00:17:20.833 "trsvcid": "$NVMF_PORT", 00:17:20.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.833 "hdgst": ${hdgst:-false}, 00:17:20.833 "ddgst": ${ddgst:-false} 00:17:20.833 }, 00:17:20.833 "method": "bdev_nvme_attach_controller" 00:17:20.833 } 00:17:20.833 EOF 00:17:20.833 )") 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:20.833 07:04:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:20.833 "params": { 00:17:20.833 "name": "Nvme1", 00:17:20.833 "trtype": "tcp", 00:17:20.833 "traddr": "10.0.0.2", 00:17:20.833 "adrfam": "ipv4", 00:17:20.833 "trsvcid": "4420", 00:17:20.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.833 "hdgst": false, 00:17:20.833 "ddgst": false 00:17:20.833 }, 00:17:20.833 "method": "bdev_nvme_attach_controller" 00:17:20.833 }' 00:17:20.833 [2024-07-13 07:04:50.246233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:20.833 [2024-07-13 07:04:50.246319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507951 ] 00:17:20.833 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.833 [2024-07-13 07:04:50.284659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:21.090 [2024-07-13 07:04:50.316940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.090 [2024-07-13 07:04:50.411786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.346 Running I/O for 10 seconds... 
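Note on the invocation above: bdevperf reads its bdev configuration from /dev/fd/62, a descriptor created by bash process substitution around gen_nvmf_target_json. To replay the verify pass outside the harness, the same JSON can go in a plain file; the envelope below follows SPDK's app-config layout, the params mirror what the trace printed, and the /tmp path is illustrative:

    cat > /tmp/zcopy_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 10 -q 128 -w verify -o 8192

Here -q 128 -w verify -o 8192 keeps 128 outstanding 8192-byte verify I/Os in flight for 10 seconds, matching the depth and IO size echoed in the results that follow.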
00:17:33.554 00:17:33.554 Latency(us) 00:17:33.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.554 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:33.554 Verification LBA range: start 0x0 length 0x1000 00:17:33.554 Nvme1n1 : 10.01 5872.90 45.88 0.00 0.00 21734.02 573.44 31457.28 00:17:33.554 =================================================================================================================== 00:17:33.554 Total : 5872.90 45.88 0.00 0.00 21734.02 573.44 31457.28 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1509145 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:33.554 { 00:17:33.554 "params": { 00:17:33.554 "name": "Nvme$subsystem", 00:17:33.554 "trtype": "$TEST_TRANSPORT", 00:17:33.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:33.554 "adrfam": "ipv4", 00:17:33.554 "trsvcid": "$NVMF_PORT", 00:17:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:33.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:33.554 "hdgst": ${hdgst:-false}, 00:17:33.554 "ddgst": ${ddgst:-false} 00:17:33.554 }, 00:17:33.554 "method": "bdev_nvme_attach_controller" 00:17:33.554 } 00:17:33.554 EOF 00:17:33.554 )") 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:33.554 [2024-07-13 07:05:01.025138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.554 [2024-07-13 07:05:01.025199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:33.554 07:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:33.554 "params": { 00:17:33.554 "name": "Nvme1", 00:17:33.554 "trtype": "tcp", 00:17:33.554 "traddr": "10.0.0.2", 00:17:33.554 "adrfam": "ipv4", 00:17:33.554 "trsvcid": "4420", 00:17:33.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.555 "hdgst": false, 00:17:33.555 "ddgst": false 00:17:33.555 }, 00:17:33.555 "method": "bdev_nvme_attach_controller" 00:17:33.555 }' 00:17:33.555 [2024-07-13 07:05:01.033100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.033125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.041118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.041141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.049142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.049177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.057172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.057193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.064453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:33.555 [2024-07-13 07:05:01.064520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509145 ] 00:17:33.555 [2024-07-13 07:05:01.065196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.065233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.073228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.073249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.081247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.081267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.089265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.089285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.555 [2024-07-13 07:05:01.096261] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
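From this point to the end of the section the log is dominated by repeating two-line pairs: subsystem.c rejects an nvmf_subsystem_add_ns RPC because NSID 1 is already attached, and nvmf_rpc.c reports the failure. The callsite name nvmf_rpc_ns_paused is the tell: each add_ns attempt pauses and then resumes the subsystem, and hammering that path while the 5-second randrw bdevperf run (perfpid 1509145) has zero-copy I/O in flight is precisely the stress this test is after. A hedged reconstruction of the driving loop (the actual wording in target/zcopy.sh may differ):

    # Reconstruction, not verbatim: rpc_cmd and $perfpid are as set up earlier
    # in the trace.
    while kill -0 "$perfpid" 2> /dev/null; do
        # Re-adding NSID 1 always fails, but every attempt forces a subsystem
        # pause/resume cycle underneath the in-flight zero-copy I/O.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done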
00:17:33.555 [2024-07-13 07:05:01.097288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.097309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.105311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.105331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.113335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.113356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.121356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.121383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.125292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.555 [2024-07-13 07:05:01.129391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.129416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.137438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.137474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.145426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.145449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.153445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.153467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.161464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.161486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.169487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.169509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.177538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.177571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.185565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.185601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.193553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.193575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.201572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.201592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.209591] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.209612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.217617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.217639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.219437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.555 [2024-07-13 07:05:01.225636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.225658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.233672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.233701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.241711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.241747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.249733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.249772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.257759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.257797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.265778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.265828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.273796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.273833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.281818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.555 [2024-07-13 07:05:01.281880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.555 [2024-07-13 07:05:01.289811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.289832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.297891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.297929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.305906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.305945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.313923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.313957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.321930] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.321953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.329932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.329954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.337991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.338019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.345995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.346020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.354002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.354027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.362018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.362042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.370041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.370064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.378062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.378084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.386086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.386107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.394109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.394130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.402231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.402256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.410233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.410256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.418271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.418324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.426304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.426330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.434328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.434357] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.442345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.442372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 Running I/O for 5 seconds... 00:17:33.556 [2024-07-13 07:05:01.450366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.450394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.466429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.466475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.479493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.479522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.492463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.492492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.505830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.505862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.518942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.518970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.532636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.532680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.545877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.545904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.558896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.558924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.571773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.571800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.584987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.585016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.598109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.598141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.611453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.611480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.624401] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.624428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.637643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.637670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.650546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.650580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.663149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.663178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.676049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.676078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.688860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.688895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.701576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.701603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.713892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.713920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.556 [2024-07-13 07:05:01.726706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.556 [2024-07-13 07:05:01.726737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.739053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.739082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.751909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.751948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.764387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.764432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.776975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.777003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.789976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.790003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.802496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.802523] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.815218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.815246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.828341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.828368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.840859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.840894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.853199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.853243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.865707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.865734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.878445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.878472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.890976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.891011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.903998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.904026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.916506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.916534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.928955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.928986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.941348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.941375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.954109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.954137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.966687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.966714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.979836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.979889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:01.992982] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:01.993010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.005673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.005716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.019393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.019438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.031933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.031961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.044931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.044959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.058022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.058050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.070540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.070568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.083171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.083199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.096359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.096404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.109117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.109145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.122681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.122709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.135788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.135815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.148605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.148646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.162047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.162075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.557 [2024-07-13 07:05:02.174378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:33.557 [2024-07-13 07:05:02.174405] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:33.557 [2024-07-13 07:05:02.187598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:33.557 [2024-07-13 07:05:02.187626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2054 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546 "Unable to add namespace") repeats with only the timestamps changing, at roughly 12-13 ms intervals, from 07:05:02.199 through 07:05:06.042 (elapsed marks 00:17:33.557 through 00:17:36.657) ...]
00:17:36.657 [2024-07-13 07:05:06.055035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:36.657 [2024-07-13 07:05:06.055063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:36.657 [2024-07-13 07:05:06.067439]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.657 [2024-07-13 07:05:06.067467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.657 [2024-07-13 07:05:06.079815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.657 [2024-07-13 07:05:06.079843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.657 [2024-07-13 07:05:06.091936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.657 [2024-07-13 07:05:06.091963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.657 [2024-07-13 07:05:06.103964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.657 [2024-07-13 07:05:06.103992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.115330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.115372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.127176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.127204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.139738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.139766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.152259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.152287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.164595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.164623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.176557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.176585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.189055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.189084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.201760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.201788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.213755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.213783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.225943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.225970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.238844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.238884] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.251093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.251121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.263248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.263277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.275757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.275785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.288209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.288237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.915 [2024-07-13 07:05:06.300823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.915 [2024-07-13 07:05:06.300851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.916 [2024-07-13 07:05:06.313370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.916 [2024-07-13 07:05:06.313413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.916 [2024-07-13 07:05:06.326266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.916 [2024-07-13 07:05:06.326295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.916 [2024-07-13 07:05:06.338393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.916 [2024-07-13 07:05:06.338421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.916 [2024-07-13 07:05:06.350584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.916 [2024-07-13 07:05:06.350612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.916 [2024-07-13 07:05:06.362735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.916 [2024-07-13 07:05:06.362764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.375284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.375312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.387609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.387637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.399643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.399672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.412431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.412459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.425313] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.425341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.437993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.438022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.451356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.451383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.465088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.465119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.470946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.470971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 00:17:37.174 Latency(us) 00:17:37.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.174 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:37.174 Nvme1n1 : 5.01 9970.49 77.89 0.00 0.00 12817.85 5679.79 27962.03 00:17:37.174 =================================================================================================================== 00:17:37.174 Total : 9970.49 77.89 0.00 0.00 12817.85 5679.79 27962.03 00:17:37.174 [2024-07-13 07:05:06.478957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.478981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.486970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.486993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.495027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.495077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.507082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.507145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.515079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.515128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.523099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.523148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.531121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.531170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.539141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.539189] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.174 [2024-07-13 07:05:06.547176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.174 [2024-07-13 07:05:06.547226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at roughly 8 ms intervals from 07:05:06.555 through 07:05:06.651 while the test waits for the abort job to finish; about a dozen further repetitions elided ...]
00:17:37.432 [2024-07-13 07:05:06.659433]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.432 [2024-07-13 07:05:06.659461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.432 [2024-07-13 07:05:06.667510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.432 [2024-07-13 07:05:06.667560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.432 [2024-07-13 07:05:06.675522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.432 [2024-07-13 07:05:06.675571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.432 [2024-07-13 07:05:06.683505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.432 [2024-07-13 07:05:06.683531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.432 [2024-07-13 07:05:06.691525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.432 [2024-07-13 07:05:06.691549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.432 [2024-07-13 07:05:06.699546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.432 [2024-07-13 07:05:06.699571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1509145) - No such process 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1509145 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:37.432 delay0 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.432 07:05:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:37.432 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.432 [2024-07-13 07:05:06.863031] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:45.536 Initializing NVMe Controllers 00:17:45.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:45.536 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:45.536 Initialization complete. Launching workers. 00:17:45.536 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 20254 00:17:45.536 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20369, failed to submit 123 00:17:45.536 success 20275, unsuccess 94, failed 0 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.536 07:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.536 rmmod nvme_tcp 00:17:45.536 rmmod nvme_fabrics 00:17:45.536 rmmod nvme_keyring 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1507811 ']' 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1507811 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1507811 ']' 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1507811 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1507811 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1507811' 00:17:45.536 killing process with pid 1507811 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1507811 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1507811 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.536 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.537 07:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.537 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.537 07:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.912 07:05:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
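
The long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors above is the zcopy test deliberately re-adding NSID 1 while the subsystem is paused; once the background loop is killed, the harness swaps in a delay bdev and runs the abort workload whose latency table appears above. As a minimal shell sketch of that tail sequence, assuming a running nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, an existing malloc0 bdev, and $SPDK_DIR pointing at a built SPDK tree ($SPDK_DIR is an assumption, not something the log defines):

  rpc=$SPDK_DIR/scripts/rpc.py

  # Free NSID 1, then back it with a delay bdev (the four latency
  # arguments are average/p99 read/write delays in microseconds, i.e.
  # one second each) so the abort run has slow I/O to race against.
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # 5 seconds of 50/50 random read/write at queue depth 64 on one core,
  # submitting aborts against the in-flight commands, as the harness does.
  $SPDK_DIR/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
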
00:17:46.912 00:17:46.912 real 0m28.759s 00:17:46.912 user 0m41.261s 00:17:46.912 sys 0m9.908s 00:17:46.912 07:05:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.912 07:05:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.912 ************************************ 00:17:46.912 END TEST nvmf_zcopy 00:17:46.912 ************************************ 00:17:46.912 07:05:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:46.912 07:05:16 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:46.912 07:05:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:46.912 07:05:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.912 07:05:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.169 ************************************ 00:17:47.169 START TEST nvmf_nmic 00:17:47.169 ************************************ 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:47.169 * Looking for test storage... 00:17:47.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.169 07:05:16 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... duplicate toolchain entries elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicate toolchain entries elided ...]:/var/lib/snapd/snap/bin 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicate toolchain entries elided ...]:/var/lib/snapd/snap/bin 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicate toolchain entries elided ...]:/var/lib/snapd/snap/bin 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.170 07:05:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.067 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:49.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:49.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:49.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.068 07:05:18 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:49.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:49.068 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:49.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:49.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:17:49.326 00:17:49.326 --- 10.0.0.2 ping statistics --- 00:17:49.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.326 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:17:49.326 00:17:49.326 --- 10.0.0.1 ping statistics --- 00:17:49.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.326 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1512639 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1512639 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1512639 ']' 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.326 07:05:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.326 [2024-07-13 07:05:18.718611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:17:49.326 [2024-07-13 07:05:18.718695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.326 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.326 [2024-07-13 07:05:18.757790] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:49.584 [2024-07-13 07:05:18.790571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.584 [2024-07-13 07:05:18.885474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.584 [2024-07-13 07:05:18.885536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.584 [2024-07-13 07:05:18.885554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.584 [2024-07-13 07:05:18.885567] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.584 [2024-07-13 07:05:18.885579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.584 [2024-07-13 07:05:18.885666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.584 [2024-07-13 07:05:18.885722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.584 [2024-07-13 07:05:18.885780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.584 [2024-07-13 07:05:18.885782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.584 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.584 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:49.584 07:05:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.584 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.584 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 [2024-07-13 07:05:19.044809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 Malloc0 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # 
set +x 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 [2024-07-13 07:05:19.095951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:49.842 test case1: single bdev can't be used in multiple subsystems 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:49.842 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.843 [2024-07-13 07:05:19.119799] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:49.843 [2024-07-13 07:05:19.119828] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:49.843 [2024-07-13 07:05:19.119859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.843 request: 00:17:49.843 { 00:17:49.843 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:49.843 "namespace": { 00:17:49.843 "bdev_name": "Malloc0", 00:17:49.843 "no_auto_visible": false 00:17:49.843 }, 00:17:49.843 "method": "nvmf_subsystem_add_ns", 00:17:49.843 "req_id": 1 00:17:49.843 } 00:17:49.843 Got JSON-RPC error response 00:17:49.843 response: 00:17:49.843 { 00:17:49.843 "code": -32602, 00:17:49.843 "message": "Invalid parameters" 00:17:49.843 } 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:49.843 Adding namespace failed - expected result. 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:49.843 test case2: host connect to nvmf target in multiple paths 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.843 [2024-07-13 07:05:19.127938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.843 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.407 07:05:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:51.336 07:05:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:51.336 07:05:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:51.336 07:05:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.336 07:05:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:51.336 07:05:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:53.229 07:05:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:53.229 07:05:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:53.229 07:05:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.229 07:05:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:53.229 07:05:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.229 07:05:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:53.229 07:05:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:53.229 [global] 00:17:53.229 thread=1 00:17:53.229 invalidate=1 00:17:53.229 rw=write 00:17:53.229 time_based=1 00:17:53.229 runtime=1 00:17:53.229 ioengine=libaio 00:17:53.229 direct=1 00:17:53.229 bs=4096 00:17:53.229 iodepth=1 00:17:53.229 norandommap=0 00:17:53.229 numjobs=1 00:17:53.229 00:17:53.229 verify_dump=1 00:17:53.229 verify_backlog=512 00:17:53.229 verify_state_save=0 00:17:53.229 do_verify=1 00:17:53.229 verify=crc32c-intel 00:17:53.229 [job0] 00:17:53.229 filename=/dev/nvme0n1 00:17:53.229 Could not set queue depth (nvme0n1) 00:17:53.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:53.229 
fio-3.35 00:17:53.229 Starting 1 thread 00:17:54.603 00:17:54.603 job0: (groupid=0, jobs=1): err= 0: pid=1513158: Sat Jul 13 07:05:23 2024 00:17:54.603 read: IOPS=524, BW=2100KiB/s (2150kB/s)(2148KiB/1023msec) 00:17:54.603 slat (nsec): min=6014, max=62253, avg=17297.14, stdev=8495.98 00:17:54.603 clat (usec): min=245, max=41002, avg=1447.62, stdev=6689.84 00:17:54.603 lat (usec): min=258, max=41030, avg=1464.92, stdev=6691.82 00:17:54.603 clat percentiles (usec): 00:17:54.603 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 273], 00:17:54.603 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 318], 00:17:54.603 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 392], 95.00th=[ 482], 00:17:54.603 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:54.603 | 99.99th=[41157] 00:17:54.603 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:17:54.603 slat (usec): min=7, max=24895, avg=37.60, stdev=777.60 00:17:54.603 clat (usec): min=155, max=484, avg=183.65, stdev=30.43 00:17:54.603 lat (usec): min=165, max=25164, avg=221.25, stdev=780.97 00:17:54.603 clat percentiles (usec): 00:17:54.603 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:17:54.603 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:17:54.603 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 231], 00:17:54.603 | 99.00th=[ 338], 99.50th=[ 429], 99.90th=[ 441], 99.95th=[ 486], 00:17:54.603 | 99.99th=[ 486] 00:17:54.603 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:17:54.603 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:54.603 lat (usec) : 250=64.45%, 500=34.34%, 750=0.26% 00:17:54.603 lat (msec) : 50=0.96% 00:17:54.603 cpu : usr=1.57%, sys=1.86%, ctx=1564, majf=0, minf=2 00:17:54.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.603 issued rwts: total=537,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.603 00:17:54.603 Run status group 0 (all jobs): 00:17:54.603 READ: bw=2100KiB/s (2150kB/s), 2100KiB/s-2100KiB/s (2150kB/s-2150kB/s), io=2148KiB (2200kB), run=1023-1023msec 00:17:54.603 WRITE: bw=4004KiB/s (4100kB/s), 4004KiB/s-4004KiB/s (4100kB/s-4100kB/s), io=4096KiB (4194kB), run=1023-1023msec 00:17:54.603 00:17:54.603 Disk stats (read/write): 00:17:54.603 nvme0n1: ios=558/1024, merge=0/0, ticks=1601/184, in_queue=1785, util=98.30% 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1231 -- # return 0 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.603 07:05:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:54.603 rmmod nvme_tcp 00:17:54.603 rmmod nvme_fabrics 00:17:54.603 rmmod nvme_keyring 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1512639 ']' 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1512639 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1512639 ']' 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1512639 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1512639 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1512639' 00:17:54.603 killing process with pid 1512639 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1512639 00:17:54.603 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1512639 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.861 07:05:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.395 07:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:57.395 00:17:57.395 real 0m9.948s 00:17:57.395 user 0m22.397s 00:17:57.395 sys 0m2.347s 00:17:57.395 07:05:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.395 07:05:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:57.395 ************************************ 00:17:57.395 END TEST nvmf_nmic 00:17:57.395 ************************************ 00:17:57.395 
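For reference, the nvmf_nmic run above boils down to the following sequence — a condensed sketch, not the literal nmic.sh source, assuming a running nvmf_tgt with subsystem cnode1 and bdev Malloc0 already created, and using the rpc.py, host NQN/ID, and addresses shown in the trace:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55)

# cnode1 owns Malloc0 and listens on two TCP ports for the multipath case.
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Test case 1: a bdev can be claimed by only one subsystem, so adding Malloc0
# to cnode2 must fail with the "Invalid parameters" JSON-RPC error logged above.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    || echo ' Adding namespace failed - expected result.'

# Test case 2: the host reaches cnode1 through both listeners (two paths).
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Run the verifying write workload, then tear everything down.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics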
07:05:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:57.395 07:05:26 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:57.395 07:05:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:57.395 07:05:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.395 07:05:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.395 ************************************ 00:17:57.395 START TEST nvmf_fio_target 00:17:57.395 ************************************ 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:57.395 * Looking for test storage... 00:17:57.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.395 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:57.396 07:05:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.294 07:05:28 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:59.294 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:59.294 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.294 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.295 07:05:28 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:59.295 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:59.295 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:59.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:17:59.295 00:17:59.295 --- 10.0.0.2 ping statistics --- 00:17:59.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.295 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:17:59.295 00:17:59.295 --- 10.0.0.1 ping statistics --- 00:17:59.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.295 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1515226 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1515226 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1515226 ']' 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
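The nvmftestinit plumbing traced above is worth spelling out: one port of the dual-port ice NIC (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of those steps, using this run's interface and namespace names:

# Move the target-side port into its own namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends: initiator in the root namespace, target inside it.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the default port and verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt invocation that follows is then prefixed with "ip netns exec cvl_0_0_ns_spdk" so the target binds inside that namespace.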
00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.295 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.295 [2024-07-13 07:05:28.589689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:59.295 [2024-07-13 07:05:28.589774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.295 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.295 [2024-07-13 07:05:28.628521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:59.295 [2024-07-13 07:05:28.661671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.552 [2024-07-13 07:05:28.758128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.552 [2024-07-13 07:05:28.758197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.552 [2024-07-13 07:05:28.758214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.552 [2024-07-13 07:05:28.758228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.553 [2024-07-13 07:05:28.758248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.553 [2024-07-13 07:05:28.758326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.553 [2024-07-13 07:05:28.758395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.553 [2024-07-13 07:05:28.758464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.553 [2024-07-13 07:05:28.758467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.553 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.553 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:59.553 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.553 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:59.553 07:05:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.553 07:05:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.553 07:05:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:59.809 [2024-07-13 07:05:29.126452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.809 07:05:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.067 07:05:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:00.067 07:05:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.324 07:05:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:00.324 07:05:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.581 07:05:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:00.581 07:05:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.870 07:05:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:00.870 07:05:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:01.128 07:05:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.385 07:05:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:01.385 07:05:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.643 07:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:01.643 07:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.900 07:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:01.900 07:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:02.157 07:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:02.414 07:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:02.414 07:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.671 07:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:02.671 07:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:02.928 07:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.186 [2024-07-13 07:05:32.479436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.186 07:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:03.443 07:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:03.699 07:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:04.261 07:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:18:04.261 07:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:04.261 07:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.261 07:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:04.261 07:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:04.261 07:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:06.154 07:05:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:06.154 07:05:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:06.154 07:05:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.154 07:05:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:06.154 07:05:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.154 07:05:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:06.154 07:05:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:06.411 [global] 00:18:06.411 thread=1 00:18:06.411 invalidate=1 00:18:06.411 rw=write 00:18:06.411 time_based=1 00:18:06.411 runtime=1 00:18:06.411 ioengine=libaio 00:18:06.411 direct=1 00:18:06.411 bs=4096 00:18:06.411 iodepth=1 00:18:06.411 norandommap=0 00:18:06.411 numjobs=1 00:18:06.411 00:18:06.411 verify_dump=1 00:18:06.411 verify_backlog=512 00:18:06.411 verify_state_save=0 00:18:06.411 do_verify=1 00:18:06.411 verify=crc32c-intel 00:18:06.411 [job0] 00:18:06.411 filename=/dev/nvme0n1 00:18:06.411 [job1] 00:18:06.411 filename=/dev/nvme0n2 00:18:06.411 [job2] 00:18:06.411 filename=/dev/nvme0n3 00:18:06.411 [job3] 00:18:06.411 filename=/dev/nvme0n4 00:18:06.411 Could not set queue depth (nvme0n1) 00:18:06.411 Could not set queue depth (nvme0n2) 00:18:06.411 Could not set queue depth (nvme0n3) 00:18:06.411 Could not set queue depth (nvme0n4) 00:18:06.411 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.412 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.412 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.412 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.412 fio-3.35 00:18:06.412 Starting 4 threads 00:18:07.782 00:18:07.782 job0: (groupid=0, jobs=1): err= 0: pid=1516296: Sat Jul 13 07:05:37 2024 00:18:07.782 read: IOPS=1085, BW=4344KiB/s (4448kB/s)(4348KiB/1001msec) 00:18:07.782 slat (nsec): min=4765, max=66640, avg=17810.36, stdev=9362.71 00:18:07.782 clat (usec): min=312, max=2103, avg=454.59, stdev=89.42 00:18:07.783 lat (usec): min=328, max=2118, avg=472.40, stdev=90.68 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 334], 5.00th=[ 363], 10.00th=[ 375], 20.00th=[ 396], 00:18:07.783 | 30.00th=[ 412], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 461], 00:18:07.783 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 578], 00:18:07.783 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 1729], 99.95th=[ 2114], 00:18:07.783 | 99.99th=[ 2114] 
00:18:07.783 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:07.783 slat (usec): min=6, max=15135, avg=27.52, stdev=385.89 00:18:07.783 clat (usec): min=172, max=539, avg=281.08, stdev=62.21 00:18:07.783 lat (usec): min=179, max=15656, avg=308.60, stdev=396.94 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 210], 20.00th=[ 229], 00:18:07.783 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 265], 60.00th=[ 289], 00:18:07.783 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 379], 95.00th=[ 388], 00:18:07.783 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 523], 99.95th=[ 537], 00:18:07.783 | 99.99th=[ 537] 00:18:07.783 bw ( KiB/s): min= 6192, max= 6192, per=27.79%, avg=6192.00, stdev= 0.00, samples=1 00:18:07.783 iops : min= 1548, max= 1548, avg=1548.00, stdev= 0.00, samples=1 00:18:07.783 lat (usec) : 250=23.75%, 500=66.98%, 750=9.19% 00:18:07.783 lat (msec) : 2=0.04%, 4=0.04% 00:18:07.783 cpu : usr=2.80%, sys=4.40%, ctx=2626, majf=0, minf=2 00:18:07.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 issued rwts: total=1087,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.783 job1: (groupid=0, jobs=1): err= 0: pid=1516297: Sat Jul 13 07:05:37 2024 00:18:07.783 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:07.783 slat (nsec): min=5843, max=50035, avg=10882.75, stdev=5399.26 00:18:07.783 clat (usec): min=297, max=962, avg=373.20, stdev=67.23 00:18:07.783 lat (usec): min=303, max=972, avg=384.09, stdev=67.49 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:18:07.783 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 363], 00:18:07.783 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 474], 95.00th=[ 510], 00:18:07.783 | 99.00th=[ 611], 99.50th=[ 660], 99.90th=[ 898], 99.95th=[ 963], 00:18:07.783 | 99.99th=[ 963] 00:18:07.783 write: IOPS=1744, BW=6977KiB/s (7144kB/s)(6984KiB/1001msec); 0 zone resets 00:18:07.783 slat (nsec): min=7488, max=66106, avg=15250.59, stdev=8308.70 00:18:07.783 clat (usec): min=169, max=742, avg=212.92, stdev=29.26 00:18:07.783 lat (usec): min=178, max=771, avg=228.17, stdev=33.35 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:18:07.783 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 217], 00:18:07.783 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 249], 00:18:07.783 | 99.00th=[ 306], 99.50th=[ 383], 99.90th=[ 474], 99.95th=[ 742], 00:18:07.783 | 99.99th=[ 742] 00:18:07.783 bw ( KiB/s): min= 8192, max= 8192, per=36.77%, avg=8192.00, stdev= 0.00, samples=1 00:18:07.783 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:07.783 lat (usec) : 250=50.67%, 500=46.34%, 750=2.89%, 1000=0.09% 00:18:07.783 cpu : usr=2.80%, sys=6.10%, ctx=3284, majf=0, minf=1 00:18:07.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 issued rwts: total=1536,1746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.783 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:18:07.783 job2: (groupid=0, jobs=1): err= 0: pid=1516304: Sat Jul 13 07:05:37 2024 00:18:07.783 read: IOPS=504, BW=2018KiB/s (2066kB/s)(2020KiB/1001msec) 00:18:07.783 slat (nsec): min=5128, max=35474, avg=13007.30, stdev=3447.68 00:18:07.783 clat (usec): min=260, max=41059, avg=1720.02, stdev=6997.23 00:18:07.783 lat (usec): min=273, max=41072, avg=1733.02, stdev=6997.34 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 285], 5.00th=[ 330], 10.00th=[ 367], 20.00th=[ 392], 00:18:07.783 | 30.00th=[ 408], 40.00th=[ 441], 50.00th=[ 469], 60.00th=[ 486], 00:18:07.783 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 619], 00:18:07.783 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:07.783 | 99.99th=[41157] 00:18:07.783 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:07.783 slat (nsec): min=7607, max=52998, avg=15552.29, stdev=6727.06 00:18:07.783 clat (usec): min=186, max=370, avg=221.30, stdev=19.54 00:18:07.783 lat (usec): min=196, max=391, avg=236.86, stdev=21.27 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:18:07.783 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:18:07.783 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 251], 00:18:07.783 | 99.00th=[ 293], 99.50th=[ 347], 99.90th=[ 371], 99.95th=[ 371], 00:18:07.783 | 99.99th=[ 371] 00:18:07.783 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:18:07.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:07.783 lat (usec) : 250=47.59%, 500=36.48%, 750=14.36% 00:18:07.783 lat (msec) : 50=1.57% 00:18:07.783 cpu : usr=1.10%, sys=1.10%, ctx=1017, majf=0, minf=1 00:18:07.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 issued rwts: total=505,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.783 job3: (groupid=0, jobs=1): err= 0: pid=1516305: Sat Jul 13 07:05:37 2024 00:18:07.783 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:07.783 slat (nsec): min=5877, max=59482, avg=10408.53, stdev=5453.61 00:18:07.783 clat (usec): min=312, max=564, avg=357.31, stdev=23.79 00:18:07.783 lat (usec): min=319, max=579, avg=367.72, stdev=26.29 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 330], 20.00th=[ 338], 00:18:07.783 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:18:07.783 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 396], 00:18:07.783 | 99.00th=[ 416], 99.50th=[ 490], 99.90th=[ 553], 99.95th=[ 562], 00:18:07.783 | 99.99th=[ 562] 00:18:07.783 write: IOPS=1780, BW=7121KiB/s (7292kB/s)(7128KiB/1001msec); 0 zone resets 00:18:07.783 slat (nsec): min=6547, max=60275, avg=14479.42, stdev=7982.71 00:18:07.783 clat (usec): min=175, max=376, avg=222.99, stdev=26.16 00:18:07.783 lat (usec): min=183, max=411, avg=237.47, stdev=30.58 00:18:07.783 clat percentiles (usec): 00:18:07.783 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:18:07.783 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:18:07.783 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 273], 00:18:07.783 | 
99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 359], 99.95th=[ 375], 00:18:07.783 | 99.99th=[ 375] 00:18:07.783 bw ( KiB/s): min= 8192, max= 8192, per=36.77%, avg=8192.00, stdev= 0.00, samples=1 00:18:07.783 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:07.783 lat (usec) : 250=45.90%, 500=53.98%, 750=0.12% 00:18:07.783 cpu : usr=3.70%, sys=5.10%, ctx=3319, majf=0, minf=1 00:18:07.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.783 issued rwts: total=1536,1782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.783 00:18:07.783 Run status group 0 (all jobs): 00:18:07.783 READ: bw=18.2MiB/s (19.1MB/s), 2018KiB/s-6138KiB/s (2066kB/s-6285kB/s), io=18.2MiB (19.1MB), run=1001-1001msec 00:18:07.783 WRITE: bw=21.8MiB/s (22.8MB/s), 2046KiB/s-7121KiB/s (2095kB/s-7292kB/s), io=21.8MiB (22.8MB), run=1001-1001msec 00:18:07.783 00:18:07.783 Disk stats (read/write): 00:18:07.783 nvme0n1: ios=1049/1149, merge=0/0, ticks=1423/304, in_queue=1727, util=97.49% 00:18:07.783 nvme0n2: ios=1313/1536, merge=0/0, ticks=1430/308, in_queue=1738, util=97.66% 00:18:07.783 nvme0n3: ios=170/512, merge=0/0, ticks=718/109, in_queue=827, util=88.88% 00:18:07.783 nvme0n4: ios=1261/1536, merge=0/0, ticks=429/313, in_queue=742, util=89.63% 00:18:07.783 07:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:07.783 [global] 00:18:07.783 thread=1 00:18:07.783 invalidate=1 00:18:07.783 rw=randwrite 00:18:07.783 time_based=1 00:18:07.783 runtime=1 00:18:07.783 ioengine=libaio 00:18:07.783 direct=1 00:18:07.783 bs=4096 00:18:07.783 iodepth=1 00:18:07.783 norandommap=0 00:18:07.783 numjobs=1 00:18:07.783 00:18:07.783 verify_dump=1 00:18:07.783 verify_backlog=512 00:18:07.783 verify_state_save=0 00:18:07.783 do_verify=1 00:18:07.783 verify=crc32c-intel 00:18:07.783 [job0] 00:18:07.783 filename=/dev/nvme0n1 00:18:07.783 [job1] 00:18:07.783 filename=/dev/nvme0n2 00:18:07.783 [job2] 00:18:07.783 filename=/dev/nvme0n3 00:18:07.783 [job3] 00:18:07.783 filename=/dev/nvme0n4 00:18:07.783 Could not set queue depth (nvme0n1) 00:18:07.783 Could not set queue depth (nvme0n2) 00:18:07.783 Could not set queue depth (nvme0n3) 00:18:07.783 Could not set queue depth (nvme0n4) 00:18:08.040 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.040 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.040 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.040 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.040 fio-3.35 00:18:08.040 Starting 4 threads 00:18:09.411 00:18:09.411 job0: (groupid=0, jobs=1): err= 0: pid=1516529: Sat Jul 13 07:05:38 2024 00:18:09.411 read: IOPS=1470, BW=5882KiB/s (6023kB/s)(5888KiB/1001msec) 00:18:09.411 slat (nsec): min=7053, max=50547, avg=13429.27, stdev=5938.99 00:18:09.411 clat (usec): min=298, max=1178, avg=375.60, stdev=47.35 00:18:09.411 lat (usec): min=306, max=1187, avg=389.03, stdev=49.76 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 
310], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:18:09.411 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 379], 00:18:09.411 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 449], 00:18:09.411 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 799], 99.95th=[ 1172], 00:18:09.411 | 99.99th=[ 1172] 00:18:09.411 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:09.411 slat (nsec): min=8996, max=80294, avg=18924.18, stdev=9559.89 00:18:09.411 clat (usec): min=168, max=804, avg=250.20, stdev=82.10 00:18:09.411 lat (usec): min=179, max=817, avg=269.13, stdev=86.99 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 194], 00:18:09.411 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 237], 00:18:09.411 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 383], 95.00th=[ 453], 00:18:09.411 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 717], 99.95th=[ 807], 00:18:09.411 | 99.99th=[ 807] 00:18:09.411 bw ( KiB/s): min= 7800, max= 7800, per=38.70%, avg=7800.00, stdev= 0.00, samples=1 00:18:09.411 iops : min= 1950, max= 1950, avg=1950.00, stdev= 0.00, samples=1 00:18:09.411 lat (usec) : 250=36.07%, 500=62.30%, 750=1.50%, 1000=0.10% 00:18:09.411 lat (msec) : 2=0.03% 00:18:09.411 cpu : usr=3.60%, sys=6.50%, ctx=3010, majf=0, minf=2 00:18:09.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 issued rwts: total=1472,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.411 job1: (groupid=0, jobs=1): err= 0: pid=1516532: Sat Jul 13 07:05:38 2024 00:18:09.411 read: IOPS=615, BW=2461KiB/s (2520kB/s)(2500KiB/1016msec) 00:18:09.411 slat (nsec): min=6018, max=65795, avg=19276.17, stdev=9933.39 00:18:09.411 clat (usec): min=264, max=41382, avg=1209.11, stdev=5800.41 00:18:09.411 lat (usec): min=273, max=41399, avg=1228.38, stdev=5800.88 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 306], 00:18:09.411 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 363], 00:18:09.411 | 70.00th=[ 400], 80.00th=[ 457], 90.00th=[ 494], 95.00th=[ 529], 00:18:09.411 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:09.411 | 99.99th=[41157] 00:18:09.411 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:18:09.411 slat (nsec): min=6473, max=58681, avg=14996.14, stdev=5770.86 00:18:09.411 clat (usec): min=175, max=444, avg=219.49, stdev=27.95 00:18:09.411 lat (usec): min=187, max=459, avg=234.49, stdev=27.93 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:18:09.411 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:18:09.411 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 262], 00:18:09.411 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[ 437], 99.95th=[ 445], 00:18:09.411 | 99.99th=[ 445] 00:18:09.411 bw ( KiB/s): min= 3720, max= 4472, per=20.32%, avg=4096.00, stdev=531.74, samples=2 00:18:09.411 iops : min= 930, max= 1118, avg=1024.00, stdev=132.94, samples=2 00:18:09.411 lat (usec) : 250=56.40%, 500=40.39%, 750=2.43% 00:18:09.411 lat (msec) : 50=0.79% 00:18:09.411 cpu : usr=1.18%, sys=3.05%, ctx=1649, majf=0, minf=1 00:18:09.411 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 issued rwts: total=625,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.411 job2: (groupid=0, jobs=1): err= 0: pid=1516533: Sat Jul 13 07:05:38 2024 00:18:09.411 read: IOPS=1538, BW=6154KiB/s (6302kB/s)(6160KiB/1001msec) 00:18:09.411 slat (nsec): min=5417, max=68714, avg=18014.33, stdev=9352.78 00:18:09.411 clat (usec): min=257, max=619, avg=321.49, stdev=37.33 00:18:09.411 lat (usec): min=268, max=634, avg=339.50, stdev=41.19 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:18:09.411 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:18:09.411 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 375], 00:18:09.411 | 99.00th=[ 482], 99.50th=[ 523], 99.90th=[ 578], 99.95th=[ 619], 00:18:09.411 | 99.99th=[ 619] 00:18:09.411 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:09.411 slat (nsec): min=6120, max=67851, avg=12549.38, stdev=6106.63 00:18:09.411 clat (usec): min=169, max=476, avg=213.15, stdev=32.34 00:18:09.411 lat (usec): min=177, max=491, avg=225.70, stdev=34.96 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 190], 00:18:09.411 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:18:09.411 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 281], 00:18:09.411 | 99.00th=[ 343], 99.50th=[ 392], 99.90th=[ 416], 99.95th=[ 416], 00:18:09.411 | 99.99th=[ 478] 00:18:09.411 bw ( KiB/s): min= 8192, max= 8192, per=40.64%, avg=8192.00, stdev= 0.00, samples=1 00:18:09.411 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:09.411 lat (usec) : 250=52.12%, 500=47.58%, 750=0.31% 00:18:09.411 cpu : usr=2.50%, sys=6.00%, ctx=3589, majf=0, minf=1 00:18:09.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.411 job3: (groupid=0, jobs=1): err= 0: pid=1516534: Sat Jul 13 07:05:38 2024 00:18:09.411 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:18:09.411 slat (nsec): min=12458, max=46652, avg=21587.05, stdev=8589.18 00:18:09.411 clat (usec): min=377, max=41279, avg=39117.94, stdev=8653.92 00:18:09.411 lat (usec): min=396, max=41299, avg=39139.53, stdev=8654.39 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 379], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:09.411 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:09.411 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:09.411 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:09.411 | 99.99th=[41157] 00:18:09.411 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:18:09.411 slat (nsec): min=7242, max=61487, avg=19909.66, stdev=9152.96 00:18:09.411 clat (usec): min=191, max=532, avg=258.58, stdev=49.03 00:18:09.411 lat 
(usec): min=206, max=572, avg=278.48, stdev=48.88 00:18:09.411 clat percentiles (usec): 00:18:09.411 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:18:09.411 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 249], 00:18:09.411 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 326], 95.00th=[ 375], 00:18:09.411 | 99.00th=[ 416], 99.50th=[ 449], 99.90th=[ 537], 99.95th=[ 537], 00:18:09.411 | 99.99th=[ 537] 00:18:09.411 bw ( KiB/s): min= 4096, max= 4096, per=20.32%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:09.411 lat (usec) : 250=59.36%, 500=36.52%, 750=0.19% 00:18:09.411 lat (msec) : 50=3.93% 00:18:09.411 cpu : usr=0.60%, sys=1.19%, ctx=535, majf=0, minf=1 00:18:09.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.411 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.411 00:18:09.411 Run status group 0 (all jobs): 00:18:09.411 READ: bw=14.1MiB/s (14.8MB/s), 87.4KiB/s-6154KiB/s (89.5kB/s-6302kB/s), io=14.3MiB (15.0MB), run=1001-1016msec 00:18:09.411 WRITE: bw=19.7MiB/s (20.6MB/s), 2034KiB/s-8184KiB/s (2083kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1016msec 00:18:09.411 00:18:09.411 Disk stats (read/write): 00:18:09.412 nvme0n1: ios=1071/1536, merge=0/0, ticks=1374/368, in_queue=1742, util=97.80% 00:18:09.412 nvme0n2: ios=641/1024, merge=0/0, ticks=583/220, in_queue=803, util=86.99% 00:18:09.412 nvme0n3: ios=1394/1536, merge=0/0, ticks=437/324, in_queue=761, util=88.82% 00:18:09.412 nvme0n4: ios=45/512, merge=0/0, ticks=1643/127, in_queue=1770, util=97.89% 00:18:09.412 07:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:09.412 [global] 00:18:09.412 thread=1 00:18:09.412 invalidate=1 00:18:09.412 rw=write 00:18:09.412 time_based=1 00:18:09.412 runtime=1 00:18:09.412 ioengine=libaio 00:18:09.412 direct=1 00:18:09.412 bs=4096 00:18:09.412 iodepth=128 00:18:09.412 norandommap=0 00:18:09.412 numjobs=1 00:18:09.412 00:18:09.412 verify_dump=1 00:18:09.412 verify_backlog=512 00:18:09.412 verify_state_save=0 00:18:09.412 do_verify=1 00:18:09.412 verify=crc32c-intel 00:18:09.412 [job0] 00:18:09.412 filename=/dev/nvme0n1 00:18:09.412 [job1] 00:18:09.412 filename=/dev/nvme0n2 00:18:09.412 [job2] 00:18:09.412 filename=/dev/nvme0n3 00:18:09.412 [job3] 00:18:09.412 filename=/dev/nvme0n4 00:18:09.412 Could not set queue depth (nvme0n1) 00:18:09.412 Could not set queue depth (nvme0n2) 00:18:09.412 Could not set queue depth (nvme0n3) 00:18:09.412 Could not set queue depth (nvme0n4) 00:18:09.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.412 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.412 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.412 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.412 fio-3.35 00:18:09.412 Starting 4 threads 00:18:10.782 00:18:10.782 job0: (groupid=0, jobs=1): err= 0: pid=1516760: Sat Jul 13 07:05:39 
2024 00:18:10.782 read: IOPS=5408, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1005msec) 00:18:10.782 slat (usec): min=2, max=12183, avg=84.62, stdev=556.41 00:18:10.782 clat (usec): min=1499, max=40109, avg=11923.23, stdev=4024.03 00:18:10.782 lat (usec): min=5473, max=40116, avg=12007.85, stdev=4058.59 00:18:10.782 clat percentiles (usec): 00:18:10.782 | 1.00th=[ 6390], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10159], 00:18:10.782 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:18:10.782 | 70.00th=[11338], 80.00th=[12256], 90.00th=[18744], 95.00th=[21890], 00:18:10.782 | 99.00th=[28967], 99.50th=[30278], 99.90th=[30278], 99.95th=[32900], 00:18:10.782 | 99.99th=[40109] 00:18:10.782 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:18:10.782 slat (usec): min=3, max=12447, avg=81.15, stdev=523.60 00:18:10.782 clat (usec): min=1739, max=28077, avg=11134.92, stdev=2869.65 00:18:10.782 lat (usec): min=1762, max=28084, avg=11216.08, stdev=2896.78 00:18:10.782 clat percentiles (usec): 00:18:10.782 | 1.00th=[ 5866], 5.00th=[ 7701], 10.00th=[ 9372], 20.00th=[ 9896], 00:18:10.782 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10945], 00:18:10.782 | 70.00th=[11207], 80.00th=[11600], 90.00th=[13304], 95.00th=[16909], 00:18:10.782 | 99.00th=[23987], 99.50th=[24249], 99.90th=[26870], 99.95th=[26870], 00:18:10.782 | 99.99th=[28181] 00:18:10.782 bw ( KiB/s): min=20576, max=24480, per=33.05%, avg=22528.00, stdev=2760.54, samples=2 00:18:10.782 iops : min= 5144, max= 6120, avg=5632.00, stdev=690.14, samples=2 00:18:10.782 lat (msec) : 2=0.04%, 10=19.91%, 20=74.36%, 50=5.69% 00:18:10.782 cpu : usr=7.87%, sys=12.75%, ctx=354, majf=0, minf=17 00:18:10.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:10.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.782 issued rwts: total=5436,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.782 job1: (groupid=0, jobs=1): err= 0: pid=1516761: Sat Jul 13 07:05:39 2024 00:18:10.782 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:18:10.782 slat (usec): min=3, max=17432, avg=156.19, stdev=1105.51 00:18:10.782 clat (usec): min=6507, max=71891, avg=19092.18, stdev=10968.78 00:18:10.782 lat (usec): min=6519, max=71932, avg=19248.37, stdev=11087.36 00:18:10.782 clat percentiles (usec): 00:18:10.782 | 1.00th=[ 8586], 5.00th=[10683], 10.00th=[11207], 20.00th=[11863], 00:18:10.782 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13566], 60.00th=[15926], 00:18:10.782 | 70.00th=[22414], 80.00th=[23725], 90.00th=[34866], 95.00th=[43779], 00:18:10.782 | 99.00th=[57410], 99.50th=[58983], 99.90th=[60556], 99.95th=[65799], 00:18:10.782 | 99.99th=[71828] 00:18:10.782 write: IOPS=3561, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:18:10.783 slat (usec): min=3, max=32214, avg=133.49, stdev=1056.62 00:18:10.783 clat (usec): min=726, max=66887, avg=19154.97, stdev=10900.26 00:18:10.783 lat (usec): min=2749, max=66903, avg=19288.46, stdev=10963.15 00:18:10.783 clat percentiles (usec): 00:18:10.783 | 1.00th=[ 4686], 5.00th=[ 6652], 10.00th=[ 7635], 20.00th=[ 9765], 00:18:10.783 | 30.00th=[12125], 40.00th=[13829], 50.00th=[16057], 60.00th=[19268], 00:18:10.783 | 70.00th=[22676], 80.00th=[26346], 90.00th=[34866], 95.00th=[41157], 00:18:10.783 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 
99.95th=[61080], 00:18:10.783 | 99.99th=[66847] 00:18:10.783 bw ( KiB/s): min=13008, max=14576, per=20.23%, avg=13792.00, stdev=1108.74, samples=2 00:18:10.783 iops : min= 3252, max= 3644, avg=3448.00, stdev=277.19, samples=2 00:18:10.783 lat (usec) : 750=0.02% 00:18:10.783 lat (msec) : 4=0.27%, 10=11.12%, 20=52.29%, 50=34.30%, 100=2.02% 00:18:10.783 cpu : usr=4.09%, sys=8.77%, ctx=282, majf=0, minf=13 00:18:10.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:10.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.783 issued rwts: total=3072,3576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.783 job2: (groupid=0, jobs=1): err= 0: pid=1516762: Sat Jul 13 07:05:39 2024 00:18:10.783 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:18:10.783 slat (usec): min=2, max=35420, avg=149.99, stdev=1260.56 00:18:10.783 clat (msec): min=4, max=108, avg=19.07, stdev=10.70 00:18:10.783 lat (msec): min=4, max=108, avg=19.22, stdev=10.84 00:18:10.783 clat percentiles (msec): 00:18:10.783 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:18:10.783 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 19], 00:18:10.783 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 26], 95.00th=[ 33], 00:18:10.783 | 99.00th=[ 85], 99.50th=[ 85], 99.90th=[ 109], 99.95th=[ 109], 00:18:10.783 | 99.99th=[ 109] 00:18:10.783 write: IOPS=3472, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1003msec); 0 zone resets 00:18:10.783 slat (usec): min=3, max=20517, avg=127.66, stdev=1006.32 00:18:10.783 clat (usec): min=398, max=108468, avg=19710.01, stdev=14988.33 00:18:10.783 lat (msec): min=2, max=108, avg=19.84, stdev=15.03 00:18:10.783 clat percentiles (msec): 00:18:10.783 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 13], 00:18:10.783 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:18:10.783 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 45], 00:18:10.783 | 99.00th=[ 102], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:18:10.783 | 99.99th=[ 109] 00:18:10.783 bw ( KiB/s): min=12288, max=14552, per=19.69%, avg=13420.00, stdev=1600.89, samples=2 00:18:10.783 iops : min= 3072, max= 3638, avg=3355.00, stdev=400.22, samples=2 00:18:10.783 lat (usec) : 500=0.02% 00:18:10.783 lat (msec) : 4=0.59%, 10=5.20%, 20=68.86%, 50=21.68%, 100=2.70% 00:18:10.783 lat (msec) : 250=0.95% 00:18:10.783 cpu : usr=4.59%, sys=5.59%, ctx=212, majf=0, minf=7 00:18:10.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:10.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.783 issued rwts: total=3072,3483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.783 job3: (groupid=0, jobs=1): err= 0: pid=1516763: Sat Jul 13 07:05:39 2024 00:18:10.783 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:18:10.783 slat (usec): min=3, max=17193, avg=124.27, stdev=814.69 00:18:10.783 clat (msec): min=4, max=109, avg=16.89, stdev=14.64 00:18:10.783 lat (msec): min=4, max=109, avg=17.02, stdev=14.71 00:18:10.783 clat percentiles (msec): 00:18:10.783 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:18:10.783 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 
00:18:10.783 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 22], 95.00th=[ 38], 00:18:10.783 | 99.00th=[ 97], 99.50th=[ 110], 99.90th=[ 110], 99.95th=[ 110], 00:18:10.783 | 99.99th=[ 110] 00:18:10.783 write: IOPS=4411, BW=17.2MiB/s (18.1MB/s)(17.3MiB/1005msec); 0 zone resets 00:18:10.783 slat (usec): min=3, max=25382, avg=98.69, stdev=672.65 00:18:10.783 clat (usec): min=1884, max=50007, avg=13143.54, stdev=4283.85 00:18:10.783 lat (usec): min=1906, max=50040, avg=13242.23, stdev=4305.35 00:18:10.783 clat percentiles (usec): 00:18:10.783 | 1.00th=[ 5604], 5.00th=[ 7701], 10.00th=[10683], 20.00th=[11207], 00:18:10.783 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:18:10.783 | 70.00th=[13173], 80.00th=[13960], 90.00th=[16712], 95.00th=[22414], 00:18:10.783 | 99.00th=[32637], 99.50th=[32900], 99.90th=[32900], 99.95th=[50070], 00:18:10.783 | 99.99th=[50070] 00:18:10.783 bw ( KiB/s): min=16384, max=18096, per=25.29%, avg=17240.00, stdev=1210.57, samples=2 00:18:10.783 iops : min= 4096, max= 4524, avg=4310.00, stdev=302.64, samples=2 00:18:10.783 lat (msec) : 2=0.02%, 4=0.14%, 10=7.73%, 20=82.93%, 50=7.14% 00:18:10.783 lat (msec) : 100=1.75%, 250=0.29% 00:18:10.783 cpu : usr=6.97%, sys=8.96%, ctx=395, majf=0, minf=15 00:18:10.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:10.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.783 issued rwts: total=4096,4434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.783 00:18:10.783 Run status group 0 (all jobs): 00:18:10.783 READ: bw=60.9MiB/s (63.9MB/s), 12.0MiB/s-21.1MiB/s (12.5MB/s-22.2MB/s), io=61.2MiB (64.2MB), run=1003-1005msec 00:18:10.783 WRITE: bw=66.6MiB/s (69.8MB/s), 13.6MiB/s-21.9MiB/s (14.2MB/s-23.0MB/s), io=66.9MiB (70.1MB), run=1003-1005msec 00:18:10.783 00:18:10.783 Disk stats (read/write): 00:18:10.783 nvme0n1: ios=4658/4772, merge=0/0, ticks=31811/28059, in_queue=59870, util=85.67% 00:18:10.783 nvme0n2: ios=2664/3072, merge=0/0, ticks=37023/45389, in_queue=82412, util=86.59% 00:18:10.783 nvme0n3: ios=2560/2765, merge=0/0, ticks=34387/33555, in_queue=67942, util=88.70% 00:18:10.783 nvme0n4: ios=3490/3584, merge=0/0, ticks=25011/24603, in_queue=49614, util=88.29% 00:18:10.783 07:05:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:10.783 [global] 00:18:10.783 thread=1 00:18:10.783 invalidate=1 00:18:10.783 rw=randwrite 00:18:10.783 time_based=1 00:18:10.783 runtime=1 00:18:10.783 ioengine=libaio 00:18:10.783 direct=1 00:18:10.783 bs=4096 00:18:10.783 iodepth=128 00:18:10.783 norandommap=0 00:18:10.783 numjobs=1 00:18:10.783 00:18:10.783 verify_dump=1 00:18:10.783 verify_backlog=512 00:18:10.783 verify_state_save=0 00:18:10.783 do_verify=1 00:18:10.783 verify=crc32c-intel 00:18:10.783 [job0] 00:18:10.783 filename=/dev/nvme0n1 00:18:10.783 [job1] 00:18:10.783 filename=/dev/nvme0n2 00:18:10.783 [job2] 00:18:10.783 filename=/dev/nvme0n3 00:18:10.783 [job3] 00:18:10.783 filename=/dev/nvme0n4 00:18:10.783 Could not set queue depth (nvme0n1) 00:18:10.783 Could not set queue depth (nvme0n2) 00:18:10.783 Could not set queue depth (nvme0n3) 00:18:10.783 Could not set queue depth (nvme0n4) 00:18:10.783 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:18:10.783 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:10.783 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:10.783 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:10.783 fio-3.35 00:18:10.783 Starting 4 threads 00:18:12.153 00:18:12.153 job0: (groupid=0, jobs=1): err= 0: pid=1517099: Sat Jul 13 07:05:41 2024 00:18:12.153 read: IOPS=2890, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1003msec) 00:18:12.153 slat (usec): min=3, max=30713, avg=169.16, stdev=1123.56 00:18:12.153 clat (usec): min=667, max=77369, avg=22648.91, stdev=13389.71 00:18:12.153 lat (usec): min=6788, max=77408, avg=22818.07, stdev=13484.15 00:18:12.153 clat percentiles (usec): 00:18:12.153 | 1.00th=[ 7046], 5.00th=[10028], 10.00th=[10945], 20.00th=[11863], 00:18:12.153 | 30.00th=[12518], 40.00th=[13960], 50.00th=[16057], 60.00th=[24249], 00:18:12.153 | 70.00th=[28705], 80.00th=[31851], 90.00th=[41681], 95.00th=[50070], 00:18:12.153 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[69731], 00:18:12.153 | 99.99th=[77071] 00:18:12.153 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:18:12.153 slat (usec): min=4, max=14594, avg=155.11, stdev=865.54 00:18:12.153 clat (msec): min=8, max=119, avg=19.83, stdev=19.85 00:18:12.153 lat (msec): min=8, max=119, avg=19.98, stdev=19.99 00:18:12.153 clat percentiles (msec): 00:18:12.153 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:18:12.153 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15], 00:18:12.153 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 29], 95.00th=[ 67], 00:18:12.153 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 120], 99.95th=[ 120], 00:18:12.153 | 99.99th=[ 120] 00:18:12.153 bw ( KiB/s): min=12016, max=12560, per=19.51%, avg=12288.00, stdev=384.67, samples=2 00:18:12.153 iops : min= 3004, max= 3140, avg=3072.00, stdev=96.17, samples=2 00:18:12.153 lat (usec) : 750=0.02% 00:18:12.153 lat (msec) : 10=3.18%, 20=62.80%, 50=27.94%, 100=4.79%, 250=1.27% 00:18:12.153 cpu : usr=4.79%, sys=6.39%, ctx=307, majf=0, minf=1 00:18:12.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:12.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.153 issued rwts: total=2899,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.153 job1: (groupid=0, jobs=1): err= 0: pid=1517114: Sat Jul 13 07:05:41 2024 00:18:12.153 read: IOPS=4522, BW=17.7MiB/s (18.5MB/s)(18.0MiB/1019msec) 00:18:12.153 slat (usec): min=2, max=13232, avg=101.23, stdev=693.13 00:18:12.153 clat (usec): min=1770, max=39489, avg=13524.19, stdev=5108.68 00:18:12.153 lat (usec): min=1781, max=39498, avg=13625.42, stdev=5138.96 00:18:12.153 clat percentiles (usec): 00:18:12.153 | 1.00th=[ 3163], 5.00th=[ 6718], 10.00th=[ 8455], 20.00th=[10945], 00:18:12.153 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566], 00:18:12.153 | 70.00th=[13829], 80.00th=[15008], 90.00th=[18744], 95.00th=[24773], 00:18:12.153 | 99.00th=[32637], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:18:12.153 | 99.99th=[39584] 00:18:12.153 write: IOPS=4793, BW=18.7MiB/s (19.6MB/s)(19.1MiB/1019msec); 0 zone resets 00:18:12.153 slat (usec): 
min=3, max=18148, avg=97.43, stdev=617.95 00:18:12.153 clat (usec): min=382, max=53715, avg=13650.29, stdev=7734.87 00:18:12.153 lat (usec): min=426, max=53727, avg=13747.71, stdev=7768.35 00:18:12.153 clat percentiles (usec): 00:18:12.153 | 1.00th=[ 1074], 5.00th=[ 3687], 10.00th=[ 5932], 20.00th=[ 9110], 00:18:12.153 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:18:12.153 | 70.00th=[13566], 80.00th=[16909], 90.00th=[24249], 95.00th=[30278], 00:18:12.153 | 99.00th=[46400], 99.50th=[50070], 99.90th=[53740], 99.95th=[53740], 00:18:12.153 | 99.99th=[53740] 00:18:12.153 bw ( KiB/s): min=17536, max=20528, per=30.22%, avg=19032.00, stdev=2115.66, samples=2 00:18:12.153 iops : min= 4384, max= 5132, avg=4758.00, stdev=528.92, samples=2 00:18:12.153 lat (usec) : 500=0.02%, 750=0.25%, 1000=0.16% 00:18:12.153 lat (msec) : 2=0.65%, 4=2.71%, 10=16.28%, 20=68.61%, 50=11.06% 00:18:12.153 lat (msec) : 100=0.26% 00:18:12.153 cpu : usr=4.52%, sys=6.09%, ctx=390, majf=0, minf=1 00:18:12.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:12.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.153 issued rwts: total=4608,4885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.153 job2: (groupid=0, jobs=1): err= 0: pid=1517115: Sat Jul 13 07:05:41 2024 00:18:12.153 read: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec) 00:18:12.153 slat (usec): min=2, max=12757, avg=110.22, stdev=717.98 00:18:12.153 clat (usec): min=6116, max=45192, avg=14693.01, stdev=5676.34 00:18:12.153 lat (usec): min=6121, max=45202, avg=14803.23, stdev=5721.42 00:18:12.153 clat percentiles (usec): 00:18:12.153 | 1.00th=[ 6521], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11076], 00:18:12.153 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13304], 60.00th=[14615], 00:18:12.153 | 70.00th=[15664], 80.00th=[16909], 90.00th=[19792], 95.00th=[23200], 00:18:12.153 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:18:12.153 | 99.99th=[45351] 00:18:12.153 write: IOPS=4429, BW=17.3MiB/s (18.1MB/s)(17.6MiB/1017msec); 0 zone resets 00:18:12.153 slat (usec): min=3, max=19246, avg=110.80, stdev=741.28 00:18:12.153 clat (usec): min=2205, max=44559, avg=15092.87, stdev=6321.74 00:18:12.153 lat (usec): min=2211, max=44566, avg=15203.67, stdev=6365.07 00:18:12.153 clat percentiles (usec): 00:18:12.153 | 1.00th=[ 6194], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10552], 00:18:12.153 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13173], 60.00th=[14353], 00:18:12.153 | 70.00th=[15008], 80.00th=[19268], 90.00th=[25560], 95.00th=[28705], 00:18:12.153 | 99.00th=[34866], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:18:12.153 | 99.99th=[44303] 00:18:12.153 bw ( KiB/s): min=17512, max=17512, per=27.80%, avg=17512.00, stdev= 0.00, samples=2 00:18:12.153 iops : min= 4378, max= 4378, avg=4378.00, stdev= 0.00, samples=2 00:18:12.153 lat (msec) : 4=0.21%, 10=12.29%, 20=73.07%, 50=14.43% 00:18:12.153 cpu : usr=3.05%, sys=6.99%, ctx=398, majf=0, minf=1 00:18:12.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:12.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.153 issued rwts: total=4096,4505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.153 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:18:12.153 job3: (groupid=0, jobs=1): err= 0: pid=1517116: Sat Jul 13 07:05:41 2024 00:18:12.153 read: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1012msec) 00:18:12.153 slat (usec): min=2, max=20007, avg=155.44, stdev=1201.30 00:18:12.153 clat (usec): min=2021, max=75941, avg=18599.24, stdev=11331.07 00:18:12.153 lat (usec): min=2305, max=75974, avg=18754.68, stdev=11427.78 00:18:12.153 clat percentiles (usec): 00:18:12.153 | 1.00th=[ 5538], 5.00th=[ 8848], 10.00th=[10552], 20.00th=[12125], 00:18:12.153 | 30.00th=[12649], 40.00th=[13304], 50.00th=[14353], 60.00th=[16188], 00:18:12.153 | 70.00th=[18220], 80.00th=[24511], 90.00th=[31327], 95.00th=[41157], 00:18:12.153 | 99.00th=[61604], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:18:12.153 | 99.99th=[76022] 00:18:12.153 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:18:12.153 slat (usec): min=3, max=16076, avg=130.68, stdev=836.84 00:18:12.153 clat (usec): min=1200, max=136424, avg=19278.01, stdev=21419.40 00:18:12.153 lat (usec): min=1224, max=136451, avg=19408.69, stdev=21541.49 00:18:12.153 clat percentiles (msec): 00:18:12.153 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:18:12.153 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:18:12.153 | 70.00th=[ 15], 80.00th=[ 20], 90.00th=[ 29], 95.00th=[ 63], 00:18:12.153 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 138], 00:18:12.153 | 99.99th=[ 138] 00:18:12.154 bw ( KiB/s): min= 9304, max=19232, per=22.65%, avg=14268.00, stdev=7020.16, samples=2 00:18:12.154 iops : min= 2326, max= 4808, avg=3567.00, stdev=1755.04, samples=2 00:18:12.154 lat (msec) : 2=0.04%, 4=0.40%, 10=12.62%, 20=65.06%, 50=16.66% 00:18:12.154 lat (msec) : 100=3.71%, 250=1.51% 00:18:12.154 cpu : usr=4.06%, sys=6.63%, ctx=345, majf=0, minf=1 00:18:12.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:12.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.154 issued rwts: total=3182,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.154 00:18:12.154 Run status group 0 (all jobs): 00:18:12.154 READ: bw=56.7MiB/s (59.4MB/s), 11.3MiB/s-17.7MiB/s (11.8MB/s-18.5MB/s), io=57.8MiB (60.6MB), run=1003-1019msec 00:18:12.154 WRITE: bw=61.5MiB/s (64.5MB/s), 12.0MiB/s-18.7MiB/s (12.5MB/s-19.6MB/s), io=62.7MiB (65.7MB), run=1003-1019msec 00:18:12.154 00:18:12.154 Disk stats (read/write): 00:18:12.154 nvme0n1: ios=2322/2560, merge=0/0, ticks=17062/17216, in_queue=34278, util=94.99% 00:18:12.154 nvme0n2: ios=3864/4096, merge=0/0, ticks=36688/38174, in_queue=74862, util=87.26% 00:18:12.154 nvme0n3: ios=3584/3655, merge=0/0, ticks=25438/24488, in_queue=49926, util=87.95% 00:18:12.154 nvme0n4: ios=3129/3215, merge=0/0, ticks=42479/31189, in_queue=73668, util=98.73% 00:18:12.154 07:05:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:12.154 07:05:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1517248 00:18:12.154 07:05:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:12.154 07:05:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:12.154 [global] 00:18:12.154 thread=1 00:18:12.154 invalidate=1 00:18:12.154 rw=read 00:18:12.154 time_based=1 
00:18:12.154 runtime=10 00:18:12.154 ioengine=libaio 00:18:12.154 direct=1 00:18:12.154 bs=4096 00:18:12.154 iodepth=1 00:18:12.154 norandommap=1 00:18:12.154 numjobs=1 00:18:12.154 00:18:12.154 [job0] 00:18:12.154 filename=/dev/nvme0n1 00:18:12.154 [job1] 00:18:12.154 filename=/dev/nvme0n2 00:18:12.154 [job2] 00:18:12.154 filename=/dev/nvme0n3 00:18:12.154 [job3] 00:18:12.154 filename=/dev/nvme0n4 00:18:12.154 Could not set queue depth (nvme0n1) 00:18:12.154 Could not set queue depth (nvme0n2) 00:18:12.154 Could not set queue depth (nvme0n3) 00:18:12.154 Could not set queue depth (nvme0n4) 00:18:12.410 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.410 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.410 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.410 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.410 fio-3.35 00:18:12.410 Starting 4 threads 00:18:14.931 07:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:15.500 07:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:15.500 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6012928, buflen=4096 00:18:15.500 fio: pid=1517343, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.500 07:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.500 07:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:15.500 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=28069888, buflen=4096 00:18:15.500 fio: pid=1517342, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.758 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.758 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:15.758 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=15220736, buflen=4096 00:18:15.758 fio: pid=1517340, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:16.016 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.016 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:16.275 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=30543872, buflen=4096 00:18:16.275 fio: pid=1517341, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:16.275 00:18:16.275 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1517340: Sat Jul 13 07:05:45 2024 00:18:16.275 read: IOPS=1078, BW=4312KiB/s (4416kB/s)(14.5MiB/3447msec) 00:18:16.275 slat (usec): min=5, max=30627, avg=22.64, stdev=567.20 00:18:16.275 clat (usec): min=244, max=46914, avg=896.73, stdev=4788.68 00:18:16.275 lat 
(usec): min=250, max=46927, avg=919.37, stdev=4822.21 00:18:16.275 clat percentiles (usec): 00:18:16.275 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 302], 00:18:16.275 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:18:16.275 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 445], 00:18:16.275 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:18:16.275 | 99.99th=[46924] 00:18:16.275 bw ( KiB/s): min= 104, max=11952, per=17.92%, avg=3752.00, stdev=4334.93, samples=6 00:18:16.275 iops : min= 26, max= 2988, avg=938.00, stdev=1083.73, samples=6 00:18:16.275 lat (usec) : 250=0.22%, 500=96.99%, 750=1.18%, 1000=0.13% 00:18:16.275 lat (msec) : 2=0.05%, 50=1.40% 00:18:16.275 cpu : usr=0.58%, sys=1.45%, ctx=3720, majf=0, minf=1 00:18:16.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 issued rwts: total=3717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.275 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1517341: Sat Jul 13 07:05:45 2024 00:18:16.275 read: IOPS=2002, BW=8008KiB/s (8200kB/s)(29.1MiB/3725msec) 00:18:16.275 slat (usec): min=4, max=18905, avg=17.49, stdev=278.15 00:18:16.275 clat (usec): min=254, max=41940, avg=475.87, stdev=2207.25 00:18:16.275 lat (usec): min=260, max=60002, avg=492.49, stdev=2292.98 00:18:16.275 clat percentiles (usec): 00:18:16.275 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 318], 00:18:16.275 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 359], 00:18:16.275 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 441], 00:18:16.275 | 99.00th=[ 594], 99.50th=[ 668], 99.90th=[41157], 99.95th=[41157], 00:18:16.275 | 99.99th=[41681] 00:18:16.275 bw ( KiB/s): min= 2980, max=10216, per=40.29%, avg=8434.86, stdev=2480.21, samples=7 00:18:16.275 iops : min= 745, max= 2554, avg=2108.71, stdev=620.05, samples=7 00:18:16.275 lat (usec) : 500=97.43%, 750=2.11%, 1000=0.11% 00:18:16.275 lat (msec) : 2=0.04%, 10=0.01%, 50=0.29% 00:18:16.275 cpu : usr=2.04%, sys=3.30%, ctx=7461, majf=0, minf=1 00:18:16.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 issued rwts: total=7458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.275 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1517342: Sat Jul 13 07:05:45 2024 00:18:16.275 read: IOPS=2153, BW=8615KiB/s (8821kB/s)(26.8MiB/3182msec) 00:18:16.275 slat (usec): min=5, max=12912, avg=12.89, stdev=155.94 00:18:16.275 clat (usec): min=259, max=41336, avg=445.38, stdev=2080.71 00:18:16.275 lat (usec): min=266, max=53988, avg=458.27, stdev=2123.01 00:18:16.275 clat percentiles (usec): 00:18:16.275 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 297], 00:18:16.275 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 338], 00:18:16.275 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 424], 00:18:16.275 | 99.00th=[ 611], 99.50th=[ 676], 99.90th=[41157], 
99.95th=[41157], 00:18:16.275 | 99.99th=[41157] 00:18:16.275 bw ( KiB/s): min= 7040, max=10968, per=43.62%, avg=9132.00, stdev=1595.33, samples=6 00:18:16.275 iops : min= 1760, max= 2742, avg=2283.00, stdev=398.83, samples=6 00:18:16.275 lat (usec) : 500=97.33%, 750=2.32%, 1000=0.06% 00:18:16.275 lat (msec) : 10=0.01%, 50=0.26% 00:18:16.275 cpu : usr=1.60%, sys=3.68%, ctx=6855, majf=0, minf=1 00:18:16.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 issued rwts: total=6854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.275 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1517343: Sat Jul 13 07:05:45 2024 00:18:16.275 read: IOPS=499, BW=1997KiB/s (2045kB/s)(5872KiB/2941msec) 00:18:16.275 slat (nsec): min=6388, max=58256, avg=11576.65, stdev=6173.22 00:18:16.275 clat (usec): min=276, max=41299, avg=1973.71, stdev=7871.56 00:18:16.275 lat (usec): min=284, max=41332, avg=1985.28, stdev=7873.17 00:18:16.275 clat percentiles (usec): 00:18:16.275 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 338], 00:18:16.275 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:18:16.275 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 502], 95.00th=[ 586], 00:18:16.275 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:16.275 | 99.99th=[41157] 00:18:16.275 bw ( KiB/s): min= 96, max= 3224, per=3.55%, avg=744.00, stdev=1386.93, samples=5 00:18:16.275 iops : min= 24, max= 806, avg=186.00, stdev=346.73, samples=5 00:18:16.275 lat (usec) : 500=89.52%, 750=6.13%, 1000=0.27% 00:18:16.275 lat (msec) : 2=0.07%, 50=3.95% 00:18:16.275 cpu : usr=0.34%, sys=0.88%, ctx=1471, majf=0, minf=1 00:18:16.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.275 issued rwts: total=1469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.275 00:18:16.275 Run status group 0 (all jobs): 00:18:16.275 READ: bw=20.4MiB/s (21.4MB/s), 1997KiB/s-8615KiB/s (2045kB/s-8821kB/s), io=76.1MiB (79.8MB), run=2941-3725msec 00:18:16.275 00:18:16.275 Disk stats (read/write): 00:18:16.275 nvme0n1: ios=3714/0, merge=0/0, ticks=3203/0, in_queue=3203, util=94.51% 00:18:16.275 nvme0n2: ios=7455/0, merge=0/0, ticks=3369/0, in_queue=3369, util=95.87% 00:18:16.275 nvme0n3: ios=6851/0, merge=0/0, ticks=2888/0, in_queue=2888, util=96.42% 00:18:16.275 nvme0n4: ios=1461/0, merge=0/0, ticks=3264/0, in_queue=3264, util=99.80% 00:18:16.275 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.275 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:16.533 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.533 07:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc4 00:18:16.791 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.791 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:17.049 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:17.049 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:17.307 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:17.307 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1517248 00:18:17.307 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:17.307 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:17.565 nvmf hotplug test: fio failed as expected 00:18:17.565 07:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.824 rmmod nvme_tcp 00:18:17.824 rmmod nvme_fabrics 00:18:17.824 rmmod nvme_keyring 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target 
-- nvmf/common.sh@124 -- # set -e 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1515226 ']' 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1515226 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1515226 ']' 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1515226 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1515226 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1515226' 00:18:17.824 killing process with pid 1515226 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1515226 00:18:17.824 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1515226 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.083 07:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.617 07:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.617 00:18:20.617 real 0m23.125s 00:18:20.617 user 1m20.974s 00:18:20.617 sys 0m6.914s 00:18:20.617 07:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.617 07:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.617 ************************************ 00:18:20.617 END TEST nvmf_fio_target 00:18:20.617 ************************************ 00:18:20.617 07:05:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:20.617 07:05:49 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:20.617 07:05:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:20.617 07:05:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.617 07:05:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:20.617 ************************************ 00:18:20.617 START TEST nvmf_bdevio 00:18:20.617 ************************************ 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp 00:18:20.617 * Looking for test storage... 00:18:20.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.617 07:05:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.519 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:22.519 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:22.520 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:22.520 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:22.520 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:22.520 
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:18:22.520 00:18:22.520 --- 10.0.0.2 ping statistics --- 00:18:22.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.520 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:22.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:18:22.520 00:18:22.520 --- 10.0.0.1 ping statistics --- 00:18:22.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.520 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1519960 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1519960 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1519960 ']' 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.520 07:05:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.520 [2024-07-13 07:05:51.756512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:22.520 [2024-07-13 07:05:51.756589] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.520 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.520 [2024-07-13 07:05:51.793149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:22.520 [2024-07-13 07:05:51.822412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.521 [2024-07-13 07:05:51.912275] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:22.521 [2024-07-13 07:05:51.912338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.521 [2024-07-13 07:05:51.912355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.521 [2024-07-13 07:05:51.912369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.521 [2024-07-13 07:05:51.912380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.521 [2024-07-13 07:05:51.912498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:22.521 [2024-07-13 07:05:51.912583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:22.521 [2024-07-13 07:05:51.913110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:22.521 [2024-07-13 07:05:51.913116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 [2024-07-13 07:05:52.072819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 Malloc0 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.780 07:05:52 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 [2024-07-13 07:05:52.126535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.780 { 00:18:22.780 "params": { 00:18:22.780 "name": "Nvme$subsystem", 00:18:22.780 "trtype": "$TEST_TRANSPORT", 00:18:22.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.780 "adrfam": "ipv4", 00:18:22.780 "trsvcid": "$NVMF_PORT", 00:18:22.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.780 "hdgst": ${hdgst:-false}, 00:18:22.780 "ddgst": ${ddgst:-false} 00:18:22.780 }, 00:18:22.780 "method": "bdev_nvme_attach_controller" 00:18:22.780 } 00:18:22.780 EOF 00:18:22.780 )") 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:22.780 07:05:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.780 "params": { 00:18:22.780 "name": "Nvme1", 00:18:22.780 "trtype": "tcp", 00:18:22.780 "traddr": "10.0.0.2", 00:18:22.780 "adrfam": "ipv4", 00:18:22.780 "trsvcid": "4420", 00:18:22.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.780 "hdgst": false, 00:18:22.780 "ddgst": false 00:18:22.780 }, 00:18:22.780 "method": "bdev_nvme_attach_controller" 00:18:22.780 }' 00:18:22.780 [2024-07-13 07:05:52.175490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:22.780 [2024-07-13 07:05:52.175556] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519990 ] 00:18:22.780 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.780 [2024-07-13 07:05:52.207769] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
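Four RPCs, all visible in the trace above, assemble the bdevio target: create the TCP transport, back it with a malloc bdev, wrap the bdev in a subsystem, and publish a TCP listener. Condensed into a plain rpc.py sequence (the RPC variable and socket path are assumptions; the commands and flags are copied from the trace):

#!/usr/bin/env bash
# Sketch of the target-side RPC sequence behind target/bdevio.sh.
RPC="rpc.py -s /var/tmp/spdk.sock"   # assumed invocation of SPDK's scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420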
00:18:23.038 [2024-07-13 07:05:52.237320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:23.038 [2024-07-13 07:05:52.327263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.038 [2024-07-13 07:05:52.327315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.038 [2024-07-13 07:05:52.327318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.296 I/O targets: 00:18:23.296 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:23.296 00:18:23.296 00:18:23.296 CUnit - A unit testing framework for C - Version 2.1-3 00:18:23.296 http://cunit.sourceforge.net/ 00:18:23.296 00:18:23.296 00:18:23.296 Suite: bdevio tests on: Nvme1n1 00:18:23.296 Test: blockdev write read block ...passed 00:18:23.296 Test: blockdev write zeroes read block ...passed 00:18:23.296 Test: blockdev write zeroes read no split ...passed 00:18:23.296 Test: blockdev write zeroes read split ...passed 00:18:23.296 Test: blockdev write zeroes read split partial ...passed 00:18:23.296 Test: blockdev reset ...[2024-07-13 07:05:52.716761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:23.296 [2024-07-13 07:05:52.716893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b43940 (9): Bad file descriptor 00:18:23.296 [2024-07-13 07:05:52.728329] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:23.296 passed 00:18:23.563 Test: blockdev write read 8 blocks ...passed 00:18:23.563 Test: blockdev write read size > 128k ...passed 00:18:23.563 Test: blockdev write read invalid size ...passed 00:18:23.563 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:23.563 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:23.563 Test: blockdev write read max offset ...passed 00:18:23.563 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:23.563 Test: blockdev writev readv 8 blocks ...passed 00:18:23.563 Test: blockdev writev readv 30 x 1block ...passed 00:18:23.563 Test: blockdev writev readv block ...passed 00:18:23.563 Test: blockdev writev readv size > 128k ...passed 00:18:23.563 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:23.563 Test: blockdev comparev and writev ...[2024-07-13 07:05:52.941830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.563 [2024-07-13 07:05:52.941870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.563 [2024-07-13 07:05:52.941896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.563 [2024-07-13 07:05:52.941913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.563 [2024-07-13 07:05:52.942314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.563 [2024-07-13 07:05:52.942338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:23.563 [2024-07-13 07:05:52.942359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:18:23.563 [2024-07-13 07:05:52.942375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:23.563 [2024-07-13 07:05:52.942758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.563 [2024-07-13 07:05:52.942781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:23.563 [2024-07-13 07:05:52.942802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.563 [2024-07-13 07:05:52.942817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:23.563 [2024-07-13 07:05:52.943221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.563 [2024-07-13 07:05:52.943244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:23.563 [2024-07-13 07:05:52.943265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.563 [2024-07-13 07:05:52.943280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:23.563 passed 00:18:23.821 Test: blockdev nvme passthru rw ...passed 00:18:23.821 Test: blockdev nvme passthru vendor specific ...[2024-07-13 07:05:53.025207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.821 [2024-07-13 07:05:53.025234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:23.821 [2024-07-13 07:05:53.025406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.821 [2024-07-13 07:05:53.025429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:23.821 [2024-07-13 07:05:53.025597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.821 [2024-07-13 07:05:53.025620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:23.821 [2024-07-13 07:05:53.025790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.821 [2024-07-13 07:05:53.025812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:23.821 passed 00:18:23.821 Test: blockdev nvme admin passthru ...passed 00:18:23.822 Test: blockdev copy ...passed 00:18:23.822 00:18:23.822 Run Summary: Type Total Ran Passed Failed Inactive 00:18:23.822 suites 1 1 n/a 0 0 00:18:23.822 tests 23 23 23 0 0 00:18:23.822 asserts 152 152 152 0 n/a 00:18:23.822 00:18:23.822 Elapsed time = 1.140 seconds 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.822 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.822 rmmod nvme_tcp 00:18:24.079 rmmod nvme_fabrics 00:18:24.079 rmmod nvme_keyring 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1519960 ']' 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1519960 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1519960 ']' 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1519960 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1519960 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1519960' 00:18:24.079 killing process with pid 1519960 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1519960 00:18:24.079 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1519960 00:18:24.337 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.338 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.338 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.338 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.338 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.338 07:05:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.338 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.338 07:05:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.240 07:05:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.240 00:18:26.240 real 0m6.075s 00:18:26.240 user 0m9.263s 00:18:26.240 sys 0m2.002s 00:18:26.240 07:05:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 
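The epilogue above unwinds the setup in reverse: the trap installed at startup is cleared, the subsystem is deleted, the target process is killed by PID, the initiator-side kernel modules are unloaded (the rmmod lines), and the namespace and test addresses are removed. A condensed sketch, assuming $RPC as above and nvmfpid holding the target's PID:

#!/usr/bin/env bash
# Sketch of the nvmftestfini cleanup path seen in the trace.
trap - SIGINT SIGTERM EXIT                     # drop the error-handling trap

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"                                # killprocess: stop the target

set +e                                         # module removal may fail harmlessly
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
set -e

ip netns delete cvl_0_0_ns_spdk                # _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1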
00:18:26.240 07:05:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:26.240 ************************************ 00:18:26.240 END TEST nvmf_bdevio 00:18:26.240 ************************************ 00:18:26.240 07:05:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:26.240 07:05:55 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:26.240 07:05:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:26.240 07:05:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.240 07:05:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.240 ************************************ 00:18:26.240 START TEST nvmf_auth_target 00:18:26.240 ************************************ 00:18:26.240 07:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:26.500 * Looking for test storage... 00:18:26.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:26.500 07:05:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:26.500 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.501 07:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
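target/auth.sh declares its full test matrix up front: three digests, six DH groups, and (generated a little further down) four key/ctrlr-key pairs, then sweeps every combination. Reduced to its shape, with echo standing in for the hostrpc set-options and authenticated-connect steps the real script performs:

#!/usr/bin/env bash
# Sketch of the digest x dhgroup x key sweep driven by target/auth.sh.
digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
keys=(key0 key1 key2 key3)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Real script: bdev_nvme_set_options --dhchap-digests "$digest" \
            #   --dhchap-dhgroups "$dhgroup", then an authenticated attach
            #   with key$keyid (and ckey$keyid when a ctrlr key exists).
            echo "auth attempt: $digest / $dhgroup / key$keyid"
        done
    done
done

That comes to 3 x 6 x 4 = 72 authenticated connect attempts per run.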
00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.402 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
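The discovery pass above never parses lspci output; it globs sysfs. For each whitelisted PCI function it expands /sys/bus/pci/devices/$pci/net/* (one directory per bound netdev) and strips everything up to the last slash, which is exactly where the bare names cvl_0_0 and cvl_0_1 come from. A sketch of just that mapping:

#!/usr/bin/env bash
# Sketch of the sysfs PCI -> netdev mapping (addresses taken from the trace).
pci_devs=("0000:0a:00.0" "0000:0a:00.1")
net_devs=()

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the basename only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done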
00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.403 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:18:28.661 00:18:28.661 --- 10.0.0.2 ping statistics --- 00:18:28.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.661 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:18:28.661 00:18:28.661 --- 10.0.0.1 ping statistics --- 00:18:28.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.661 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1522052 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1522052 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@829 -- # '[' -z 1522052 ']' 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.661 07:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1522195 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=52eddec6184db22cac453c6a6f4349255723465884990dbe 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1Yz 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 52eddec6184db22cac453c6a6f4349255723465884990dbe 0 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 52eddec6184db22cac453c6a6f4349255723465884990dbe 0 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=52eddec6184db22cac453c6a6f4349255723465884990dbe 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1Yz 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1Yz 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1Yz 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8ea7bfdd4c0748fd0f46f5a65c7119b615e91d796c73f3f017f941fdeb67bbf9 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.q9C 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8ea7bfdd4c0748fd0f46f5a65c7119b615e91d796c73f3f017f941fdeb67bbf9 3 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8ea7bfdd4c0748fd0f46f5a65c7119b615e91d796c73f3f017f941fdeb67bbf9 3 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8ea7bfdd4c0748fd0f46f5a65c7119b615e91d796c73f3f017f941fdeb67bbf9 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.q9C 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.q9C 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.q9C 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6c123322a4203abe409578f9bd0868ef 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ap4 00:18:28.920 07:05:58 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6c123322a4203abe409578f9bd0868ef 1 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6c123322a4203abe409578f9bd0868ef 1 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6c123322a4203abe409578f9bd0868ef 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ap4 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ap4 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Ap4 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=802e9f6de237a93e30a25c7c8df81acf8f9b3988677764b6 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X9N 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 802e9f6de237a93e30a25c7c8df81acf8f9b3988677764b6 2 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 802e9f6de237a93e30a25c7c8df81acf8f9b3988677764b6 2 00:18:28.920 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:28.921 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:28.921 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=802e9f6de237a93e30a25c7c8df81acf8f9b3988677764b6 00:18:28.921 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:28.921 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X9N 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X9N 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.X9N 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.179 07:05:58 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c0e690cf9a87c6619776249cf24d85ab0b0025cd9bf01de6 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xWb 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c0e690cf9a87c6619776249cf24d85ab0b0025cd9bf01de6 2 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c0e690cf9a87c6619776249cf24d85ab0b0025cd9bf01de6 2 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c0e690cf9a87c6619776249cf24d85ab0b0025cd9bf01de6 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xWb 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xWb 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.xWb 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e82132b0cbec667f066d4db5dc7f988a 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:29.179 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZmH 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e82132b0cbec667f066d4db5dc7f988a 1 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e82132b0cbec667f066d4db5dc7f988a 1 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e82132b0cbec667f066d4db5dc7f988a 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.180 
07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZmH 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZmH 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ZmH 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=14dbac54e43b5e8afec09fe52e9940b46fd34b4bd9a9d38023b0acde9b144bba 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qfZ 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 14dbac54e43b5e8afec09fe52e9940b46fd34b4bd9a9d38023b0acde9b144bba 3 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 14dbac54e43b5e8afec09fe52e9940b46fd34b4bd9a9d38023b0acde9b144bba 3 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=14dbac54e43b5e8afec09fe52e9940b46fd34b4bd9a9d38023b0acde9b144bba 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qfZ 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qfZ 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qfZ 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1522052 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1522052 ']' 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
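Each secret above is produced the same way: xxd pulls len/2 random bytes as a hex string, format_dhchap_key wraps it into a DHHC-1 representation via an inline python snippet, and the result lands in a chmod-0600 temp file. The python body is not shown in the trace; the sketch below assumes the conventional TP 8006 layout, base64 over the raw secret with its little-endian CRC32 appended:

#!/usr/bin/env bash
# Sketch of gen_dhchap_key. The base64(secret + crc32) payload layout is an
# assumption about the hidden python step, not something read out of this trace.
digest=$1            # 0=null, 1=sha256, 2=sha384, 3=sha512 (per the digests map)
len=$2               # hex length of the secret: 48 or 64 in this run

key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-sketch.XXXXXX)

python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF

chmod 0600 "$file"
echo "$file"         # caller stores this path in keys[i] / ckeys[i]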
00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.180 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1522195 /var/tmp/host.sock 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1522195 ']' 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:29.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.440 07:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1Yz 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.705 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.706 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1Yz 00:18:29.706 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1Yz 00:18:29.963 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.q9C ]] 00:18:29.963 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q9C 00:18:29.963 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.963 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.963 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.963 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q9C 00:18:29.963 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.q9C 00:18:30.221 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.221 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ap4 00:18:30.221 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.221 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.221 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.221 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Ap4 00:18:30.221 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Ap4 00:18:30.478 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.X9N ]] 00:18:30.478 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X9N 00:18:30.478 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.478 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.478 07:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.478 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X9N 00:18:30.478 07:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X9N 00:18:30.736 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.736 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xWb 00:18:30.736 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.736 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.736 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.736 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xWb 00:18:30.736 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xWb 00:18:30.993 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ZmH ]] 00:18:30.993 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZmH 00:18:30.993 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.993 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.993 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.993 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZmH 00:18:30.993 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
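This stretch of the trace is one loop doing the same registration four times over: every key file is added to the target application's keyring through the default RPC socket (rpc_cmd, the /var/tmp/spdk.sock waited on earlier) and to the host application's keyring through hostrpc (-s /var/tmp/host.sock), with the controller keys ckeyN registered only when one was generated. Condensed into the loop it came from, using the keys[]/ckeys[] arrays and the rpc.py path seen in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[$i]}"                         # target side
    $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"   # host side
    if [[ -n ${ckeys[$i]} ]]; then                                          # skipped for key3
        $rpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done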
/tmp/spdk.key-sha256.ZmH 00:18:31.250 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:31.250 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qfZ 00:18:31.250 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.250 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.250 07:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.250 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qfZ 00:18:31.250 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qfZ 00:18:31.506 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:31.506 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:31.506 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.506 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.506 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:31.506 07:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.763 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.327 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- 
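That attach is the first run of connect_authenticate, the helper exercised for every digest/dhgroup/key combination in the rest of this log. Its shape, reduced from the target/auth.sh line markers in the trace (a sketch, not the verbatim function; $uuid stands in for the host NQN suffix 5b23e107-7094-e311-b1cb-001e67a97d55, and the ckey array expands to nothing when ckeys[keyid] is empty):

connect_authenticate() {           # sketch of target/auth.sh:34-40 as traced
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # Authorize the host NQN on the subsystem with the key pair...
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$uuid" --dhchap-key "key$keyid" "${ckey[@]}"
    # ...then attach a controller from the host app, forcing DH-CHAP with the same keys.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
}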
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.327 { 00:18:32.327 "cntlid": 1, 00:18:32.327 "qid": 0, 00:18:32.327 "state": "enabled", 00:18:32.327 "thread": "nvmf_tgt_poll_group_000", 00:18:32.327 "listen_address": { 00:18:32.327 "trtype": "TCP", 00:18:32.327 "adrfam": "IPv4", 00:18:32.327 "traddr": "10.0.0.2", 00:18:32.327 "trsvcid": "4420" 00:18:32.327 }, 00:18:32.327 "peer_address": { 00:18:32.327 "trtype": "TCP", 00:18:32.327 "adrfam": "IPv4", 00:18:32.327 "traddr": "10.0.0.1", 00:18:32.327 "trsvcid": "54776" 00:18:32.327 }, 00:18:32.327 "auth": { 00:18:32.327 "state": "completed", 00:18:32.327 "digest": "sha256", 00:18:32.327 "dhgroup": "null" 00:18:32.327 } 00:18:32.327 } 00:18:32.327 ]' 00:18:32.327 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.584 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.584 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.584 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:32.584 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.584 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.584 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.584 07:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.839 07:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:18:33.766 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.766 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.766 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.766 07:06:03 
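The qpairs dump above is what the pass/fail decision keys on: the auth block of the connected qpair must report the digest and dhgroup that were configured, and the state completed. The three jq probes in the trace amount to:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # e.g. sha256
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. null
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # handshake finished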
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.766 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.766 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.766 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.766 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.023 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.280 00:18:34.280 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.280 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.280 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.538 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.538 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.538 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.538 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.538 07:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.538 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.538 { 00:18:34.538 "cntlid": 3, 00:18:34.538 "qid": 0, 00:18:34.538 
"state": "enabled", 00:18:34.538 "thread": "nvmf_tgt_poll_group_000", 00:18:34.538 "listen_address": { 00:18:34.538 "trtype": "TCP", 00:18:34.538 "adrfam": "IPv4", 00:18:34.538 "traddr": "10.0.0.2", 00:18:34.538 "trsvcid": "4420" 00:18:34.538 }, 00:18:34.538 "peer_address": { 00:18:34.538 "trtype": "TCP", 00:18:34.538 "adrfam": "IPv4", 00:18:34.538 "traddr": "10.0.0.1", 00:18:34.538 "trsvcid": "54804" 00:18:34.538 }, 00:18:34.538 "auth": { 00:18:34.538 "state": "completed", 00:18:34.538 "digest": "sha256", 00:18:34.538 "dhgroup": "null" 00:18:34.538 } 00:18:34.538 } 00:18:34.538 ]' 00:18:34.538 07:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.795 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.795 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.795 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:34.795 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.795 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.795 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.795 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.052 07:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.984 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:36.243 07:06:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.243 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.502 00:18:36.502 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.502 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.502 07:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.760 { 00:18:36.760 "cntlid": 5, 00:18:36.760 "qid": 0, 00:18:36.760 "state": "enabled", 00:18:36.760 "thread": "nvmf_tgt_poll_group_000", 00:18:36.760 "listen_address": { 00:18:36.760 "trtype": "TCP", 00:18:36.760 "adrfam": "IPv4", 00:18:36.760 "traddr": "10.0.0.2", 00:18:36.760 "trsvcid": "4420" 00:18:36.760 }, 00:18:36.760 "peer_address": { 00:18:36.760 "trtype": "TCP", 00:18:36.760 "adrfam": "IPv4", 00:18:36.760 "traddr": "10.0.0.1", 00:18:36.760 "trsvcid": "54826" 00:18:36.760 }, 00:18:36.760 "auth": { 00:18:36.760 "state": "completed", 00:18:36.760 "digest": "sha256", 00:18:36.760 "dhgroup": "null" 00:18:36.760 } 00:18:36.760 } 00:18:36.760 ]' 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.760 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:37.018 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.018 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.018 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.276 07:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.210 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.468 07:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.725 00:18:38.725 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.725 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.725 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.983 { 00:18:38.983 "cntlid": 7, 00:18:38.983 "qid": 0, 00:18:38.983 "state": "enabled", 00:18:38.983 "thread": "nvmf_tgt_poll_group_000", 00:18:38.983 "listen_address": { 00:18:38.983 "trtype": "TCP", 00:18:38.983 "adrfam": "IPv4", 00:18:38.983 "traddr": "10.0.0.2", 00:18:38.983 "trsvcid": "4420" 00:18:38.983 }, 00:18:38.983 "peer_address": { 00:18:38.983 "trtype": "TCP", 00:18:38.983 "adrfam": "IPv4", 00:18:38.983 "traddr": "10.0.0.1", 00:18:38.983 "trsvcid": "45502" 00:18:38.983 }, 00:18:38.983 "auth": { 00:18:38.983 "state": "completed", 00:18:38.983 "digest": "sha256", 00:18:38.983 "dhgroup": "null" 00:18:38.983 } 00:18:38.983 } 00:18:38.983 ]' 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.983 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.240 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.240 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.498 07:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
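key3 is the one asymmetric case: its ckeys[3] slot was left empty back at target/auth.sh@70, so the ${ckeys[$3]:+...} expansion produces no --dhchap-ctrlr-key or --dhchap-ctrl-secret arguments and this cycle exercises unidirectional authentication only (the host proving itself to the target), as the DHHC-1:03:-only connect above shows. The expansion trick in isolation, with the index hard-coded for illustration:

ckeys[3]=
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # :+ yields nothing for an empty slot
echo "${#ckey[@]}"                               # prints 0 -> one-way auth for key3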
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.431 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.432 07:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.689 07:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.689 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.689 07:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.946 00:18:40.946 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.947 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.947 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
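From here the trace repeats the identical cycle with the DH group switched from null to ffdhe2048 (and later ffdhe3072): the whole file is one sweep over digest x dhgroup x keyid, with bdev_nvme_set_options re-pinning the host's allowed parameters before each connect. The driving loop, per the target/auth.sh@91-96 markers; the exact contents of the digests[] and dhgroups[] arrays live in target/auth.sh and only the sha256 slice is visible in this excerpt:

for digest in "${digests[@]}"; do            # sha256 in this excerpt
    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ... as seen here
        for keyid in "${!keys[@]}"; do       # keys 0-3
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done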
-- # xtrace_disable 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.205 { 00:18:41.205 "cntlid": 9, 00:18:41.205 "qid": 0, 00:18:41.205 "state": "enabled", 00:18:41.205 "thread": "nvmf_tgt_poll_group_000", 00:18:41.205 "listen_address": { 00:18:41.205 "trtype": "TCP", 00:18:41.205 "adrfam": "IPv4", 00:18:41.205 "traddr": "10.0.0.2", 00:18:41.205 "trsvcid": "4420" 00:18:41.205 }, 00:18:41.205 "peer_address": { 00:18:41.205 "trtype": "TCP", 00:18:41.205 "adrfam": "IPv4", 00:18:41.205 "traddr": "10.0.0.1", 00:18:41.205 "trsvcid": "45534" 00:18:41.205 }, 00:18:41.205 "auth": { 00:18:41.205 "state": "completed", 00:18:41.205 "digest": "sha256", 00:18:41.205 "dhgroup": "ffdhe2048" 00:18:41.205 } 00:18:41.205 } 00:18:41.205 ]' 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.205 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.463 07:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.429 07:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.687 07:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.688 07:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.688 07:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.688 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.688 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.945 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.203 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.203 { 00:18:43.203 "cntlid": 11, 00:18:43.203 "qid": 0, 00:18:43.203 "state": "enabled", 00:18:43.203 "thread": "nvmf_tgt_poll_group_000", 00:18:43.203 "listen_address": { 00:18:43.203 "trtype": "TCP", 00:18:43.203 "adrfam": "IPv4", 00:18:43.203 "traddr": "10.0.0.2", 00:18:43.203 "trsvcid": "4420" 00:18:43.203 }, 00:18:43.203 "peer_address": { 00:18:43.203 "trtype": "TCP", 00:18:43.203 "adrfam": "IPv4", 00:18:43.203 "traddr": "10.0.0.1", 00:18:43.203 "trsvcid": "45562" 00:18:43.203 }, 00:18:43.203 "auth": { 00:18:43.203 "state": "completed", 00:18:43.203 "digest": "sha256", 00:18:43.203 "dhgroup": "ffdhe2048" 00:18:43.203 } 00:18:43.203 } 00:18:43.203 ]' 00:18:43.203 
07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.461 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.461 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.461 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.461 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.461 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.461 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.461 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.717 07:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.649 07:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.907 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.165 00:18:45.165 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.165 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.165 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.422 { 00:18:45.422 "cntlid": 13, 00:18:45.422 "qid": 0, 00:18:45.422 "state": "enabled", 00:18:45.422 "thread": "nvmf_tgt_poll_group_000", 00:18:45.422 "listen_address": { 00:18:45.422 "trtype": "TCP", 00:18:45.422 "adrfam": "IPv4", 00:18:45.422 "traddr": "10.0.0.2", 00:18:45.422 "trsvcid": "4420" 00:18:45.422 }, 00:18:45.422 "peer_address": { 00:18:45.422 "trtype": "TCP", 00:18:45.422 "adrfam": "IPv4", 00:18:45.422 "traddr": "10.0.0.1", 00:18:45.422 "trsvcid": "45578" 00:18:45.422 }, 00:18:45.422 "auth": { 00:18:45.422 "state": "completed", 00:18:45.422 "digest": "sha256", 00:18:45.422 "dhgroup": "ffdhe2048" 00:18:45.422 } 00:18:45.422 } 00:18:45.422 ]' 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.422 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.679 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.679 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.679 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.679 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.679 07:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.937 07:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:18:46.867 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.867 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.867 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.867 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.867 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.868 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.868 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.868 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.125 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.382 00:18:47.382 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.382 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.382 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.639 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.640 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.640 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.640 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.640 07:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.640 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.640 { 00:18:47.640 "cntlid": 15, 00:18:47.640 "qid": 0, 00:18:47.640 "state": "enabled", 00:18:47.640 "thread": "nvmf_tgt_poll_group_000", 00:18:47.640 "listen_address": { 00:18:47.640 "trtype": "TCP", 00:18:47.640 "adrfam": "IPv4", 00:18:47.640 "traddr": "10.0.0.2", 00:18:47.640 "trsvcid": "4420" 00:18:47.640 }, 00:18:47.640 "peer_address": { 00:18:47.640 "trtype": "TCP", 00:18:47.640 "adrfam": "IPv4", 00:18:47.640 "traddr": "10.0.0.1", 00:18:47.640 "trsvcid": "45596" 00:18:47.640 }, 00:18:47.640 "auth": { 00:18:47.640 "state": "completed", 00:18:47.640 "digest": "sha256", 00:18:47.640 "dhgroup": "ffdhe2048" 00:18:47.640 } 00:18:47.640 } 00:18:47.640 ]' 00:18:47.640 07:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.640 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.640 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.640 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.640 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.640 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.640 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.640 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.898 07:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:18:48.830 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target 
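Between combinations the teardown is symmetric with the setup, and it is the part of the cycle that keeps recurring in the trace: the RPC-created controller is detached right after the qpair checks, the kernel-initiator connection is exercised and dropped, and the host entry is revoked so the next nvmf_subsystem_add_host starts clean. The recurring epilogue, as a sketch:

hostrpc bdev_nvme_detach_controller nvme0            # drop the RPC-side controller
# ... nvme connect / qpair exercise happens here ...
nvme disconnect -n nqn.2024-03.io.spdk:cnode0        # drop the kernel-side controller
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:$uuid"          # de-authorize the host NQN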
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.088 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.346 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.603 00:18:49.603 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.603 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.603 07:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.861 { 00:18:49.861 "cntlid": 17, 00:18:49.861 "qid": 0, 00:18:49.861 "state": "enabled", 00:18:49.861 "thread": "nvmf_tgt_poll_group_000", 00:18:49.861 "listen_address": { 00:18:49.861 "trtype": "TCP", 00:18:49.861 "adrfam": "IPv4", 00:18:49.861 "traddr": 
"10.0.0.2", 00:18:49.861 "trsvcid": "4420" 00:18:49.861 }, 00:18:49.861 "peer_address": { 00:18:49.861 "trtype": "TCP", 00:18:49.861 "adrfam": "IPv4", 00:18:49.861 "traddr": "10.0.0.1", 00:18:49.861 "trsvcid": "60142" 00:18:49.861 }, 00:18:49.861 "auth": { 00:18:49.861 "state": "completed", 00:18:49.861 "digest": "sha256", 00:18:49.861 "dhgroup": "ffdhe3072" 00:18:49.861 } 00:18:49.861 } 00:18:49.861 ]' 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.861 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.120 07:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.493 07:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.751 00:18:51.751 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.751 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.751 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.009 { 00:18:52.009 "cntlid": 19, 00:18:52.009 "qid": 0, 00:18:52.009 "state": "enabled", 00:18:52.009 "thread": "nvmf_tgt_poll_group_000", 00:18:52.009 "listen_address": { 00:18:52.009 "trtype": "TCP", 00:18:52.009 "adrfam": "IPv4", 00:18:52.009 "traddr": "10.0.0.2", 00:18:52.009 "trsvcid": "4420" 00:18:52.009 }, 00:18:52.009 "peer_address": { 00:18:52.009 "trtype": "TCP", 00:18:52.009 "adrfam": "IPv4", 00:18:52.009 "traddr": "10.0.0.1", 00:18:52.009 "trsvcid": "60160" 00:18:52.009 }, 00:18:52.009 "auth": { 00:18:52.009 "state": "completed", 00:18:52.009 "digest": "sha256", 00:18:52.009 "dhgroup": "ffdhe3072" 00:18:52.009 } 00:18:52.009 } 00:18:52.009 ]' 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.009 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.267 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.267 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.267 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.267 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.267 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.524 07:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.457 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.713 07:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.970 00:18:53.970 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.970 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.970 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.230 { 00:18:54.230 "cntlid": 21, 00:18:54.230 "qid": 0, 00:18:54.230 "state": "enabled", 00:18:54.230 "thread": "nvmf_tgt_poll_group_000", 00:18:54.230 "listen_address": { 00:18:54.230 "trtype": "TCP", 00:18:54.230 "adrfam": "IPv4", 00:18:54.230 "traddr": "10.0.0.2", 00:18:54.230 "trsvcid": "4420" 00:18:54.230 }, 00:18:54.230 "peer_address": { 00:18:54.230 "trtype": "TCP", 00:18:54.230 "adrfam": "IPv4", 00:18:54.230 "traddr": "10.0.0.1", 00:18:54.230 "trsvcid": "60176" 00:18:54.230 }, 00:18:54.230 "auth": { 00:18:54.230 "state": "completed", 00:18:54.230 "digest": "sha256", 00:18:54.230 "dhgroup": "ffdhe3072" 00:18:54.230 } 00:18:54.230 } 00:18:54.230 ]' 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.230 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.492 07:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
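[Editor's note: the same eight-step pattern repeats for every digest/dhgroup/key combination in this section. As a reading aid, here is a minimal sketch of one iteration, reconstructed only from the xtrace lines above; "$hostnqn", "$hostid", "$key", and "$ckey" are illustrative stand-ins for the literal NQN/UUID/DHHC-1 values in the log, and the bare scripts/rpc.py call stands in for the log's rpc_cmd helper, which talks to the target app's default socket (hostrpc is the -s /var/tmp/host.sock variant echoed at target/auth.sh@31). This is a sketch of the flow, not the verbatim target/auth.sh source.]

  # 1. Restrict the host-side (initiator) bdev_nvme layer to one digest/dhgroup pair.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # 2. Allow the host on the subsystem with the DH-HMAC-CHAP key pair under test.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. Attach through the SPDK initiator; authentication runs during controller connect.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 4. Verify the admin qpair negotiated the expected digest/dhgroup and its auth state is "completed".
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .digest, .dhgroup, .state'
  # 5. Detach, then repeat the handshake with the kernel initiator (nvme-cli) using the raw DHHC-1 secrets.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # 6. Drop the host entry before the next key/dhgroup combination is exercised.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
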
00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.863 07:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.863 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.427 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.427 { 00:18:56.427 "cntlid": 23, 00:18:56.427 "qid": 0, 00:18:56.427 "state": "enabled", 00:18:56.427 "thread": "nvmf_tgt_poll_group_000", 00:18:56.427 "listen_address": { 00:18:56.427 "trtype": "TCP", 00:18:56.427 "adrfam": "IPv4", 00:18:56.427 "traddr": "10.0.0.2", 00:18:56.427 "trsvcid": "4420" 00:18:56.427 }, 00:18:56.427 "peer_address": { 00:18:56.427 "trtype": "TCP", 00:18:56.427 "adrfam": "IPv4", 00:18:56.427 "traddr": "10.0.0.1", 00:18:56.427 "trsvcid": "60210" 00:18:56.427 }, 00:18:56.427 "auth": { 00:18:56.427 "state": "completed", 00:18:56.427 "digest": "sha256", 00:18:56.427 "dhgroup": "ffdhe3072" 00:18:56.427 } 00:18:56.427 } 00:18:56.427 ]' 00:18:56.427 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.684 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.684 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.684 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.684 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.684 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.684 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.684 07:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.941 07:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.895 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.153 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.411 00:18:58.411 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.411 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.411 07:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.669 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.669 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.669 07:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.669 07:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.669 07:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.669 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.669 { 00:18:58.669 "cntlid": 25, 00:18:58.669 "qid": 0, 00:18:58.669 "state": "enabled", 00:18:58.669 "thread": "nvmf_tgt_poll_group_000", 00:18:58.669 "listen_address": { 00:18:58.669 "trtype": "TCP", 00:18:58.669 "adrfam": "IPv4", 00:18:58.669 "traddr": "10.0.0.2", 00:18:58.669 "trsvcid": "4420" 00:18:58.669 }, 00:18:58.669 "peer_address": { 00:18:58.669 "trtype": "TCP", 00:18:58.669 "adrfam": "IPv4", 00:18:58.669 "traddr": "10.0.0.1", 00:18:58.669 "trsvcid": "39376" 00:18:58.669 }, 00:18:58.669 "auth": { 00:18:58.669 "state": "completed", 00:18:58.669 "digest": "sha256", 00:18:58.669 "dhgroup": "ffdhe4096" 00:18:58.669 } 00:18:58.669 } 00:18:58.669 ]' 00:18:58.669 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.927 07:06:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.927 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.927 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.927 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.927 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.927 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.927 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.185 07:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:19:00.117 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.117 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.118 07:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.118 07:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.118 07:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.118 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.118 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.118 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.374 07:06:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.374 07:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.937 00:19:00.937 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.937 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.937 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.937 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.937 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.937 07:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.937 07:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.194 { 00:19:01.194 "cntlid": 27, 00:19:01.194 "qid": 0, 00:19:01.194 "state": "enabled", 00:19:01.194 "thread": "nvmf_tgt_poll_group_000", 00:19:01.194 "listen_address": { 00:19:01.194 "trtype": "TCP", 00:19:01.194 "adrfam": "IPv4", 00:19:01.194 "traddr": "10.0.0.2", 00:19:01.194 "trsvcid": "4420" 00:19:01.194 }, 00:19:01.194 "peer_address": { 00:19:01.194 "trtype": "TCP", 00:19:01.194 "adrfam": "IPv4", 00:19:01.194 "traddr": "10.0.0.1", 00:19:01.194 "trsvcid": "39402" 00:19:01.194 }, 00:19:01.194 "auth": { 00:19:01.194 "state": "completed", 00:19:01.194 "digest": "sha256", 00:19:01.194 "dhgroup": "ffdhe4096" 00:19:01.194 } 00:19:01.194 } 00:19:01.194 ]' 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.194 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.452 07:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.385 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.643 07:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.901 00:19:02.901 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.901 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.901 07:06:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.158 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.158 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.158 07:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.158 07:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.158 07:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.158 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.158 { 00:19:03.158 "cntlid": 29, 00:19:03.158 "qid": 0, 00:19:03.158 "state": "enabled", 00:19:03.158 "thread": "nvmf_tgt_poll_group_000", 00:19:03.158 "listen_address": { 00:19:03.158 "trtype": "TCP", 00:19:03.158 "adrfam": "IPv4", 00:19:03.158 "traddr": "10.0.0.2", 00:19:03.158 "trsvcid": "4420" 00:19:03.158 }, 00:19:03.158 "peer_address": { 00:19:03.158 "trtype": "TCP", 00:19:03.158 "adrfam": "IPv4", 00:19:03.158 "traddr": "10.0.0.1", 00:19:03.158 "trsvcid": "39424" 00:19:03.158 }, 00:19:03.158 "auth": { 00:19:03.158 "state": "completed", 00:19:03.158 "digest": "sha256", 00:19:03.158 "dhgroup": "ffdhe4096" 00:19:03.158 } 00:19:03.158 } 00:19:03.158 ]' 00:19:03.158 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.416 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.416 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.416 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.416 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.416 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.416 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.416 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.673 07:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:19:04.602 07:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.602 07:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.602 07:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.602 07:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.602 07:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.602 07:06:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.602 07:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.602 07:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.859 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.422 00:19:05.422 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.422 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.422 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.679 { 00:19:05.679 "cntlid": 31, 00:19:05.679 "qid": 0, 00:19:05.679 "state": "enabled", 00:19:05.679 "thread": "nvmf_tgt_poll_group_000", 00:19:05.679 "listen_address": { 00:19:05.679 "trtype": "TCP", 00:19:05.679 "adrfam": "IPv4", 00:19:05.679 "traddr": "10.0.0.2", 00:19:05.679 "trsvcid": "4420" 00:19:05.679 }, 
00:19:05.679 "peer_address": { 00:19:05.679 "trtype": "TCP", 00:19:05.679 "adrfam": "IPv4", 00:19:05.679 "traddr": "10.0.0.1", 00:19:05.679 "trsvcid": "39456" 00:19:05.679 }, 00:19:05.679 "auth": { 00:19:05.679 "state": "completed", 00:19:05.679 "digest": "sha256", 00:19:05.679 "dhgroup": "ffdhe4096" 00:19:05.679 } 00:19:05.679 } 00:19:05.679 ]' 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.679 07:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.679 07:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.679 07:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.679 07:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.936 07:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:06.868 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.151 07:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.722 00:19:07.722 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.722 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.722 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.980 { 00:19:07.980 "cntlid": 33, 00:19:07.980 "qid": 0, 00:19:07.980 "state": "enabled", 00:19:07.980 "thread": "nvmf_tgt_poll_group_000", 00:19:07.980 "listen_address": { 00:19:07.980 "trtype": "TCP", 00:19:07.980 "adrfam": "IPv4", 00:19:07.980 "traddr": "10.0.0.2", 00:19:07.980 "trsvcid": "4420" 00:19:07.980 }, 00:19:07.980 "peer_address": { 00:19:07.980 "trtype": "TCP", 00:19:07.980 "adrfam": "IPv4", 00:19:07.980 "traddr": "10.0.0.1", 00:19:07.980 "trsvcid": "39486" 00:19:07.980 }, 00:19:07.980 "auth": { 00:19:07.980 "state": "completed", 00:19:07.980 "digest": "sha256", 00:19:07.980 "dhgroup": "ffdhe6144" 00:19:07.980 } 00:19:07.980 } 00:19:07.980 ]' 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.980 07:06:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.980 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.239 07:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.612 07:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.177 00:19:10.177 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.177 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.177 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.435 { 00:19:10.435 "cntlid": 35, 00:19:10.435 "qid": 0, 00:19:10.435 "state": "enabled", 00:19:10.435 "thread": "nvmf_tgt_poll_group_000", 00:19:10.435 "listen_address": { 00:19:10.435 "trtype": "TCP", 00:19:10.435 "adrfam": "IPv4", 00:19:10.435 "traddr": "10.0.0.2", 00:19:10.435 "trsvcid": "4420" 00:19:10.435 }, 00:19:10.435 "peer_address": { 00:19:10.435 "trtype": "TCP", 00:19:10.435 "adrfam": "IPv4", 00:19:10.435 "traddr": "10.0.0.1", 00:19:10.435 "trsvcid": "41272" 00:19:10.435 }, 00:19:10.435 "auth": { 00:19:10.435 "state": "completed", 00:19:10.435 "digest": "sha256", 00:19:10.435 "dhgroup": "ffdhe6144" 00:19:10.435 } 00:19:10.435 } 00:19:10.435 ]' 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.435 07:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.001 07:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.928 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.185 07:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.798 00:19:12.798 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.798 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.798 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.055 { 00:19:13.055 "cntlid": 37, 00:19:13.055 "qid": 0, 00:19:13.055 "state": "enabled", 00:19:13.055 "thread": "nvmf_tgt_poll_group_000", 00:19:13.055 "listen_address": { 00:19:13.055 "trtype": "TCP", 00:19:13.055 "adrfam": "IPv4", 00:19:13.055 "traddr": "10.0.0.2", 00:19:13.055 "trsvcid": "4420" 00:19:13.055 }, 00:19:13.055 "peer_address": { 00:19:13.055 "trtype": "TCP", 00:19:13.055 "adrfam": "IPv4", 00:19:13.055 "traddr": "10.0.0.1", 00:19:13.055 "trsvcid": "41306" 00:19:13.055 }, 00:19:13.055 "auth": { 00:19:13.055 "state": "completed", 00:19:13.055 "digest": "sha256", 00:19:13.055 "dhgroup": "ffdhe6144" 00:19:13.055 } 00:19:13.055 } 00:19:13.055 ]' 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.055 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.312 07:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.681 07:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.246 00:19:15.246 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.246 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.246 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.505 { 00:19:15.505 "cntlid": 39, 00:19:15.505 "qid": 0, 00:19:15.505 "state": "enabled", 00:19:15.505 "thread": "nvmf_tgt_poll_group_000", 00:19:15.505 "listen_address": { 00:19:15.505 "trtype": "TCP", 00:19:15.505 "adrfam": "IPv4", 00:19:15.505 "traddr": "10.0.0.2", 00:19:15.505 "trsvcid": "4420" 00:19:15.505 }, 00:19:15.505 "peer_address": { 00:19:15.505 "trtype": "TCP", 00:19:15.505 "adrfam": "IPv4", 00:19:15.505 "traddr": "10.0.0.1", 00:19:15.505 "trsvcid": "41342" 00:19:15.505 }, 00:19:15.505 "auth": { 00:19:15.505 "state": "completed", 00:19:15.505 "digest": "sha256", 00:19:15.505 "dhgroup": "ffdhe6144" 00:19:15.505 } 00:19:15.505 } 00:19:15.505 ]' 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.505 07:06:44 
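(The three jq probes running around this point — auth.digest, auth.dhgroup, auth.state — condense to a sketch like the following. The verify_qpair_auth name and its locals are illustrative, not the literal auth.sh source; the RPC and the jq filters are the ones visible in the trace.)

verify_qpair_auth() {  # args: expected digest, expected dhgroup
    local digest=$1 dhgroup=$2 qpairs
    # Ask the target for the subsystem's active queue pairs as JSON.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # The round only passes if DH-HMAC-CHAP actually completed with the
    # parameters this iteration of the test configured.
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
}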
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.505 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.763 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.763 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.763 07:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.021 07:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.953 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.211 07:06:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.211 07:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.144 00:19:18.144 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.144 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.144 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.402 { 00:19:18.402 "cntlid": 41, 00:19:18.402 "qid": 0, 00:19:18.402 "state": "enabled", 00:19:18.402 "thread": "nvmf_tgt_poll_group_000", 00:19:18.402 "listen_address": { 00:19:18.402 "trtype": "TCP", 00:19:18.402 "adrfam": "IPv4", 00:19:18.402 "traddr": "10.0.0.2", 00:19:18.402 "trsvcid": "4420" 00:19:18.402 }, 00:19:18.402 "peer_address": { 00:19:18.402 "trtype": "TCP", 00:19:18.402 "adrfam": "IPv4", 00:19:18.402 "traddr": "10.0.0.1", 00:19:18.402 "trsvcid": "41370" 00:19:18.402 }, 00:19:18.402 "auth": { 00:19:18.402 "state": "completed", 00:19:18.402 "digest": "sha256", 00:19:18.402 "dhgroup": "ffdhe8192" 00:19:18.402 } 00:19:18.402 } 00:19:18.402 ]' 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.402 07:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.659 07:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:19:19.589 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.589 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.589 07:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.589 07:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.847 07:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.107 07:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.107 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.107 07:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.056 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.056 { 00:19:21.056 "cntlid": 43, 00:19:21.056 "qid": 0, 00:19:21.056 "state": "enabled", 00:19:21.056 "thread": "nvmf_tgt_poll_group_000", 00:19:21.056 "listen_address": { 00:19:21.056 "trtype": "TCP", 00:19:21.056 "adrfam": "IPv4", 00:19:21.056 "traddr": "10.0.0.2", 00:19:21.056 "trsvcid": "4420" 00:19:21.056 }, 00:19:21.056 "peer_address": { 00:19:21.056 "trtype": "TCP", 00:19:21.056 "adrfam": "IPv4", 00:19:21.056 "traddr": "10.0.0.1", 00:19:21.056 "trsvcid": "51966" 00:19:21.056 }, 00:19:21.056 "auth": { 00:19:21.056 "state": "completed", 00:19:21.056 "digest": "sha256", 00:19:21.056 "dhgroup": "ffdhe8192" 00:19:21.056 } 00:19:21.056 } 00:19:21.056 ]' 00:19:21.056 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.313 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.313 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.313 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.313 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.313 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.313 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.313 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.571 07:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.501 07:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.758 07:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.693 00:19:23.693 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.693 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.693 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.949 { 00:19:23.949 "cntlid": 45, 00:19:23.949 "qid": 0, 00:19:23.949 "state": "enabled", 00:19:23.949 "thread": "nvmf_tgt_poll_group_000", 00:19:23.949 "listen_address": { 00:19:23.949 "trtype": "TCP", 00:19:23.949 "adrfam": "IPv4", 00:19:23.949 "traddr": "10.0.0.2", 00:19:23.949 "trsvcid": "4420" 
00:19:23.949 }, 00:19:23.949 "peer_address": { 00:19:23.949 "trtype": "TCP", 00:19:23.949 "adrfam": "IPv4", 00:19:23.949 "traddr": "10.0.0.1", 00:19:23.949 "trsvcid": "51988" 00:19:23.949 }, 00:19:23.949 "auth": { 00:19:23.949 "state": "completed", 00:19:23.949 "digest": "sha256", 00:19:23.949 "dhgroup": "ffdhe8192" 00:19:23.949 } 00:19:23.949 } 00:19:23.949 ]' 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.949 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.206 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.206 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.206 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.206 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.206 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.463 07:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.396 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.653 07:06:54 
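(For reference, the host-side leg of each round, condensed from the @52/@55/@56 entries above. The $hostid, $key and $ckey variables are illustrative stand-ins for the UUID and DHHC-1 strings generated earlier in the run; the flags are the ones in the trace.)

# Connect with nvme-cli, presenting the host secret and, for
# bidirectional rounds, the controller secret; then tear down.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Revoke the host afterwards so the next round starts clean.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:$hostid"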
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.653 07:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.588 00:19:26.588 07:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.589 07:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.589 07:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.846 { 00:19:26.846 "cntlid": 47, 00:19:26.846 "qid": 0, 00:19:26.846 "state": "enabled", 00:19:26.846 "thread": "nvmf_tgt_poll_group_000", 00:19:26.846 "listen_address": { 00:19:26.846 "trtype": "TCP", 00:19:26.846 "adrfam": "IPv4", 00:19:26.846 "traddr": "10.0.0.2", 00:19:26.846 "trsvcid": "4420" 00:19:26.846 }, 00:19:26.846 "peer_address": { 00:19:26.846 "trtype": "TCP", 00:19:26.846 "adrfam": "IPv4", 00:19:26.846 "traddr": "10.0.0.1", 00:19:26.846 "trsvcid": "52006" 00:19:26.846 }, 00:19:26.846 "auth": { 00:19:26.846 "state": "completed", 00:19:26.846 "digest": "sha256", 00:19:26.846 "dhgroup": "ffdhe8192" 00:19:26.846 } 00:19:26.846 } 00:19:26.846 ]' 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.846 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.846 
07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.104 07:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.477 07:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.735 00:19:28.993 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.993 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.993 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.993 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.993 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.993 07:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.993 07:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.251 { 00:19:29.251 "cntlid": 49, 00:19:29.251 "qid": 0, 00:19:29.251 "state": "enabled", 00:19:29.251 "thread": "nvmf_tgt_poll_group_000", 00:19:29.251 "listen_address": { 00:19:29.251 "trtype": "TCP", 00:19:29.251 "adrfam": "IPv4", 00:19:29.251 "traddr": "10.0.0.2", 00:19:29.251 "trsvcid": "4420" 00:19:29.251 }, 00:19:29.251 "peer_address": { 00:19:29.251 "trtype": "TCP", 00:19:29.251 "adrfam": "IPv4", 00:19:29.251 "traddr": "10.0.0.1", 00:19:29.251 "trsvcid": "59578" 00:19:29.251 }, 00:19:29.251 "auth": { 00:19:29.251 "state": "completed", 00:19:29.251 "digest": "sha384", 00:19:29.251 "dhgroup": "null" 00:19:29.251 } 00:19:29.251 } 00:19:29.251 ]' 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.251 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.509 07:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.443 07:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.701 07:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.702 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.702 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.267 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.267 { 00:19:31.267 "cntlid": 51, 00:19:31.267 "qid": 0, 00:19:31.267 "state": "enabled", 00:19:31.267 "thread": "nvmf_tgt_poll_group_000", 00:19:31.267 "listen_address": { 00:19:31.267 "trtype": "TCP", 00:19:31.267 "adrfam": "IPv4", 00:19:31.267 "traddr": "10.0.0.2", 00:19:31.267 "trsvcid": "4420" 00:19:31.267 }, 00:19:31.267 "peer_address": { 00:19:31.267 "trtype": "TCP", 00:19:31.267 "adrfam": "IPv4", 00:19:31.267 "traddr": "10.0.0.1", 00:19:31.267 "trsvcid": "59620" 00:19:31.267 }, 00:19:31.267 "auth": { 00:19:31.267 "state": "completed", 00:19:31.267 "digest": "sha384", 00:19:31.267 "dhgroup": "null" 00:19:31.267 } 00:19:31.267 } 00:19:31.267 ]' 00:19:31.267 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.525 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.525 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.525 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.525 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.525 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.525 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.525 07:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.781 07:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.713 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:32.971 07:07:02 
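(The auth.sh@91-@96 markers scattered through this trace reconstruct to the driver loop below, shown condensed; the array contents noted in comments are the values exercised in this run.)

for digest in "${digests[@]}"; do        # sha256, sha384, ...
  for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ..., ffdhe8192
    for keyid in "${!keys[@]}"; do       # key0 .. key3
      # Restrict the host initiator to one digest/dhgroup pair, then run
      # a full connect/verify/disconnect round with that key.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done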
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.971 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.545 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.545 { 00:19:33.545 "cntlid": 53, 00:19:33.545 "qid": 0, 00:19:33.545 "state": "enabled", 00:19:33.545 "thread": "nvmf_tgt_poll_group_000", 00:19:33.545 "listen_address": { 00:19:33.545 "trtype": "TCP", 00:19:33.545 "adrfam": "IPv4", 00:19:33.545 "traddr": "10.0.0.2", 00:19:33.545 "trsvcid": "4420" 00:19:33.545 }, 00:19:33.545 "peer_address": { 00:19:33.545 "trtype": "TCP", 00:19:33.545 "adrfam": "IPv4", 00:19:33.545 "traddr": "10.0.0.1", 00:19:33.545 "trsvcid": "59648" 00:19:33.545 }, 00:19:33.545 "auth": { 00:19:33.545 "state": "completed", 00:19:33.545 "digest": "sha384", 00:19:33.545 "dhgroup": "null" 00:19:33.545 } 00:19:33.545 } 00:19:33.545 ]' 00:19:33.545 07:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.803 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:33.803 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.803 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.803 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.803 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.803 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.803 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.060 07:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:19:34.991 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.991 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.991 07:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.991 07:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.991 07:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.992 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.992 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.992 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.249 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.507 00:19:35.507 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.507 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.507 07:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.764 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.764 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.764 07:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.764 07:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.764 07:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.764 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.764 { 00:19:35.764 "cntlid": 55, 00:19:35.764 "qid": 0, 00:19:35.764 "state": "enabled", 00:19:35.764 "thread": "nvmf_tgt_poll_group_000", 00:19:35.764 "listen_address": { 00:19:35.764 "trtype": "TCP", 00:19:35.764 "adrfam": "IPv4", 00:19:35.764 "traddr": "10.0.0.2", 00:19:35.764 "trsvcid": "4420" 00:19:35.764 }, 00:19:35.765 "peer_address": { 00:19:35.765 "trtype": "TCP", 00:19:35.765 "adrfam": "IPv4", 00:19:35.765 "traddr": "10.0.0.1", 00:19:35.765 "trsvcid": "59674" 00:19:35.765 }, 00:19:35.765 "auth": { 00:19:35.765 "state": "completed", 00:19:35.765 "digest": "sha384", 00:19:35.765 "dhgroup": "null" 00:19:35.765 } 00:19:35.765 } 00:19:35.765 ]' 00:19:35.765 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.765 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.022 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.022 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:36.022 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.022 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.022 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.022 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.279 07:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:19:37.211 07:07:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.211 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.469 07:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.726 00:19:37.726 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.726 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.726 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.984 { 00:19:37.984 "cntlid": 57, 00:19:37.984 "qid": 0, 00:19:37.984 "state": "enabled", 00:19:37.984 "thread": "nvmf_tgt_poll_group_000", 00:19:37.984 "listen_address": { 00:19:37.984 "trtype": "TCP", 00:19:37.984 "adrfam": "IPv4", 00:19:37.984 "traddr": "10.0.0.2", 00:19:37.984 "trsvcid": "4420" 00:19:37.984 }, 00:19:37.984 "peer_address": { 00:19:37.984 "trtype": "TCP", 00:19:37.984 "adrfam": "IPv4", 00:19:37.984 "traddr": "10.0.0.1", 00:19:37.984 "trsvcid": "59712" 00:19:37.984 }, 00:19:37.984 "auth": { 00:19:37.984 "state": "completed", 00:19:37.984 "digest": "sha384", 00:19:37.984 "dhgroup": "ffdhe2048" 00:19:37.984 } 00:19:37.984 } 00:19:37.984 ]' 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.984 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.241 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.241 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.241 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.241 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.241 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.498 07:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.429 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.687 07:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.944 00:19:39.944 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.944 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.944 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.201 { 00:19:40.201 "cntlid": 59, 00:19:40.201 "qid": 0, 00:19:40.201 "state": "enabled", 00:19:40.201 "thread": "nvmf_tgt_poll_group_000", 00:19:40.201 "listen_address": { 00:19:40.201 "trtype": "TCP", 00:19:40.201 "adrfam": "IPv4", 00:19:40.201 "traddr": "10.0.0.2", 00:19:40.201 "trsvcid": "4420" 00:19:40.201 }, 00:19:40.201 "peer_address": { 00:19:40.201 "trtype": "TCP", 00:19:40.201 "adrfam": "IPv4", 00:19:40.201 
"traddr": "10.0.0.1", 00:19:40.201 "trsvcid": "59596" 00:19:40.201 }, 00:19:40.201 "auth": { 00:19:40.201 "state": "completed", 00:19:40.201 "digest": "sha384", 00:19:40.201 "dhgroup": "ffdhe2048" 00:19:40.201 } 00:19:40.201 } 00:19:40.201 ]' 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.201 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.458 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.458 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.458 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.458 07:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.829 07:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.829 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.087 00:19:42.087 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.087 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.087 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.344 { 00:19:42.344 "cntlid": 61, 00:19:42.344 "qid": 0, 00:19:42.344 "state": "enabled", 00:19:42.344 "thread": "nvmf_tgt_poll_group_000", 00:19:42.344 "listen_address": { 00:19:42.344 "trtype": "TCP", 00:19:42.344 "adrfam": "IPv4", 00:19:42.344 "traddr": "10.0.0.2", 00:19:42.344 "trsvcid": "4420" 00:19:42.344 }, 00:19:42.344 "peer_address": { 00:19:42.344 "trtype": "TCP", 00:19:42.344 "adrfam": "IPv4", 00:19:42.344 "traddr": "10.0.0.1", 00:19:42.344 "trsvcid": "59630" 00:19:42.344 }, 00:19:42.344 "auth": { 00:19:42.344 "state": "completed", 00:19:42.344 "digest": "sha384", 00:19:42.344 "dhgroup": "ffdhe2048" 00:19:42.344 } 00:19:42.344 } 00:19:42.344 ]' 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.344 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.602 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.602 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.602 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.602 07:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.602 07:07:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.860 07:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.791 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.049 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.306 00:19:44.306 07:07:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.306 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.306 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.563 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.563 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.563 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.563 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.563 07:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.563 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.563 { 00:19:44.563 "cntlid": 63, 00:19:44.563 "qid": 0, 00:19:44.563 "state": "enabled", 00:19:44.563 "thread": "nvmf_tgt_poll_group_000", 00:19:44.563 "listen_address": { 00:19:44.563 "trtype": "TCP", 00:19:44.563 "adrfam": "IPv4", 00:19:44.563 "traddr": "10.0.0.2", 00:19:44.563 "trsvcid": "4420" 00:19:44.563 }, 00:19:44.563 "peer_address": { 00:19:44.563 "trtype": "TCP", 00:19:44.563 "adrfam": "IPv4", 00:19:44.563 "traddr": "10.0.0.1", 00:19:44.563 "trsvcid": "59668" 00:19:44.563 }, 00:19:44.563 "auth": { 00:19:44.563 "state": "completed", 00:19:44.563 "digest": "sha384", 00:19:44.563 "dhgroup": "ffdhe2048" 00:19:44.563 } 00:19:44.563 } 00:19:44.563 ]' 00:19:44.563 07:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.820 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.820 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.820 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.820 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.820 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.820 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.820 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.078 07:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.013 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.271 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.529 00:19:46.529 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.529 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.529 07:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.788 { 
00:19:46.788 "cntlid": 65, 00:19:46.788 "qid": 0, 00:19:46.788 "state": "enabled", 00:19:46.788 "thread": "nvmf_tgt_poll_group_000", 00:19:46.788 "listen_address": { 00:19:46.788 "trtype": "TCP", 00:19:46.788 "adrfam": "IPv4", 00:19:46.788 "traddr": "10.0.0.2", 00:19:46.788 "trsvcid": "4420" 00:19:46.788 }, 00:19:46.788 "peer_address": { 00:19:46.788 "trtype": "TCP", 00:19:46.788 "adrfam": "IPv4", 00:19:46.788 "traddr": "10.0.0.1", 00:19:46.788 "trsvcid": "59688" 00:19:46.788 }, 00:19:46.788 "auth": { 00:19:46.788 "state": "completed", 00:19:46.788 "digest": "sha384", 00:19:46.788 "dhgroup": "ffdhe3072" 00:19:46.788 } 00:19:46.788 } 00:19:46.788 ]' 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.788 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.046 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.046 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.046 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.046 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.046 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.304 07:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.235 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.492 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:48.492 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.492 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:48.492 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.492 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.492 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.493 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.493 07:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.493 07:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.493 07:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.493 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.493 07:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.751 00:19:48.751 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.751 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.751 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.008 { 00:19:49.008 "cntlid": 67, 00:19:49.008 "qid": 0, 00:19:49.008 "state": "enabled", 00:19:49.008 "thread": "nvmf_tgt_poll_group_000", 00:19:49.008 "listen_address": { 00:19:49.008 "trtype": "TCP", 00:19:49.008 "adrfam": "IPv4", 00:19:49.008 "traddr": "10.0.0.2", 00:19:49.008 "trsvcid": "4420" 00:19:49.008 }, 00:19:49.008 "peer_address": { 00:19:49.008 "trtype": "TCP", 00:19:49.008 "adrfam": "IPv4", 00:19:49.008 "traddr": "10.0.0.1", 00:19:49.008 "trsvcid": "54866" 00:19:49.008 }, 00:19:49.008 "auth": { 00:19:49.008 "state": "completed", 00:19:49.008 "digest": "sha384", 00:19:49.008 "dhgroup": "ffdhe3072" 00:19:49.008 } 00:19:49.008 } 00:19:49.008 ]' 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.008 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.266 07:07:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.266 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.266 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.266 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.266 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.522 07:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.453 07:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.710 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.967 00:19:50.967 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.967 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.967 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.225 { 00:19:51.225 "cntlid": 69, 00:19:51.225 "qid": 0, 00:19:51.225 "state": "enabled", 00:19:51.225 "thread": "nvmf_tgt_poll_group_000", 00:19:51.225 "listen_address": { 00:19:51.225 "trtype": "TCP", 00:19:51.225 "adrfam": "IPv4", 00:19:51.225 "traddr": "10.0.0.2", 00:19:51.225 "trsvcid": "4420" 00:19:51.225 }, 00:19:51.225 "peer_address": { 00:19:51.225 "trtype": "TCP", 00:19:51.225 "adrfam": "IPv4", 00:19:51.225 "traddr": "10.0.0.1", 00:19:51.225 "trsvcid": "54886" 00:19:51.225 }, 00:19:51.225 "auth": { 00:19:51.225 "state": "completed", 00:19:51.225 "digest": "sha384", 00:19:51.225 "dhgroup": "ffdhe3072" 00:19:51.225 } 00:19:51.225 } 00:19:51.225 ]' 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.225 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.482 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.482 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.482 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.482 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.483 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.740 07:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret 
DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:19:52.673 07:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.673 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.673 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.673 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.673 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.673 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.673 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.673 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.930 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:52.930 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.930 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.930 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.930 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.930 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.930 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:52.931 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.931 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.931 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.931 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.931 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.188 00:19:53.188 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.188 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.188 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.446 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.446 07:07:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.446 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.446 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.446 07:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.446 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.446 { 00:19:53.446 "cntlid": 71, 00:19:53.446 "qid": 0, 00:19:53.446 "state": "enabled", 00:19:53.446 "thread": "nvmf_tgt_poll_group_000", 00:19:53.446 "listen_address": { 00:19:53.446 "trtype": "TCP", 00:19:53.446 "adrfam": "IPv4", 00:19:53.446 "traddr": "10.0.0.2", 00:19:53.446 "trsvcid": "4420" 00:19:53.446 }, 00:19:53.446 "peer_address": { 00:19:53.446 "trtype": "TCP", 00:19:53.446 "adrfam": "IPv4", 00:19:53.446 "traddr": "10.0.0.1", 00:19:53.446 "trsvcid": "54904" 00:19:53.446 }, 00:19:53.446 "auth": { 00:19:53.446 "state": "completed", 00:19:53.446 "digest": "sha384", 00:19:53.446 "dhgroup": "ffdhe3072" 00:19:53.446 } 00:19:53.446 } 00:19:53.446 ]' 00:19:53.446 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.704 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.704 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.704 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.704 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.704 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.704 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.704 07:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.960 07:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.891 07:07:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.148 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.405 00:19:55.405 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.405 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.405 07:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.663 { 00:19:55.663 "cntlid": 73, 00:19:55.663 "qid": 0, 00:19:55.663 "state": "enabled", 00:19:55.663 "thread": "nvmf_tgt_poll_group_000", 00:19:55.663 "listen_address": { 00:19:55.663 "trtype": "TCP", 00:19:55.663 "adrfam": "IPv4", 00:19:55.663 "traddr": "10.0.0.2", 00:19:55.663 "trsvcid": "4420" 00:19:55.663 }, 00:19:55.663 "peer_address": { 00:19:55.663 "trtype": "TCP", 00:19:55.663 "adrfam": "IPv4", 00:19:55.663 "traddr": "10.0.0.1", 00:19:55.663 "trsvcid": "54934" 00:19:55.663 }, 00:19:55.663 "auth": { 00:19:55.663 
"state": "completed", 00:19:55.663 "digest": "sha384", 00:19:55.663 "dhgroup": "ffdhe4096" 00:19:55.663 } 00:19:55.663 } 00:19:55.663 ]' 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.663 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.921 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.921 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.921 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.921 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.921 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.179 07:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.113 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.371 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.372 07:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.944 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.944 { 00:19:57.944 "cntlid": 75, 00:19:57.944 "qid": 0, 00:19:57.944 "state": "enabled", 00:19:57.944 "thread": "nvmf_tgt_poll_group_000", 00:19:57.944 "listen_address": { 00:19:57.944 "trtype": "TCP", 00:19:57.944 "adrfam": "IPv4", 00:19:57.944 "traddr": "10.0.0.2", 00:19:57.944 "trsvcid": "4420" 00:19:57.944 }, 00:19:57.944 "peer_address": { 00:19:57.944 "trtype": "TCP", 00:19:57.944 "adrfam": "IPv4", 00:19:57.944 "traddr": "10.0.0.1", 00:19:57.944 "trsvcid": "54960" 00:19:57.944 }, 00:19:57.944 "auth": { 00:19:57.944 "state": "completed", 00:19:57.944 "digest": "sha384", 00:19:57.944 "dhgroup": "ffdhe4096" 00:19:57.944 } 00:19:57.944 } 00:19:57.944 ]' 00:19:57.944 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.202 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.202 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.202 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.202 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.202 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.202 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.202 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.460 07:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.394 07:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.652 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:00.218 00:20:00.218 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.218 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.218 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.218 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.218 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.218 07:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.218 07:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.476 { 00:20:00.476 "cntlid": 77, 00:20:00.476 "qid": 0, 00:20:00.476 "state": "enabled", 00:20:00.476 "thread": "nvmf_tgt_poll_group_000", 00:20:00.476 "listen_address": { 00:20:00.476 "trtype": "TCP", 00:20:00.476 "adrfam": "IPv4", 00:20:00.476 "traddr": "10.0.0.2", 00:20:00.476 "trsvcid": "4420" 00:20:00.476 }, 00:20:00.476 "peer_address": { 00:20:00.476 "trtype": "TCP", 00:20:00.476 "adrfam": "IPv4", 00:20:00.476 "traddr": "10.0.0.1", 00:20:00.476 "trsvcid": "44078" 00:20:00.476 }, 00:20:00.476 "auth": { 00:20:00.476 "state": "completed", 00:20:00.476 "digest": "sha384", 00:20:00.476 "dhgroup": "ffdhe4096" 00:20:00.476 } 00:20:00.476 } 00:20:00.476 ]' 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.476 07:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.733 07:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:20:01.665 07:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.665 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.665 07:07:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.665 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.665 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.665 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.665 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.665 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.923 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.487 00:20:02.487 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.487 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.487 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.744 { 00:20:02.744 "cntlid": 79, 00:20:02.744 "qid": 
0, 00:20:02.744 "state": "enabled", 00:20:02.744 "thread": "nvmf_tgt_poll_group_000", 00:20:02.744 "listen_address": { 00:20:02.744 "trtype": "TCP", 00:20:02.744 "adrfam": "IPv4", 00:20:02.744 "traddr": "10.0.0.2", 00:20:02.744 "trsvcid": "4420" 00:20:02.744 }, 00:20:02.744 "peer_address": { 00:20:02.744 "trtype": "TCP", 00:20:02.744 "adrfam": "IPv4", 00:20:02.744 "traddr": "10.0.0.1", 00:20:02.744 "trsvcid": "44094" 00:20:02.744 }, 00:20:02.744 "auth": { 00:20:02.744 "state": "completed", 00:20:02.744 "digest": "sha384", 00:20:02.744 "dhgroup": "ffdhe4096" 00:20:02.744 } 00:20:02.744 } 00:20:02.744 ]' 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.744 07:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.744 07:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.744 07:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.744 07:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.744 07:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.744 07:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.000 07:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.932 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:04.188 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:04.188 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.188 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.188 07:07:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:04.188 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.188 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.189 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.189 07:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.189 07:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.189 07:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.189 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.189 07:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.752 00:20:04.752 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.752 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.752 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.009 { 00:20:05.009 "cntlid": 81, 00:20:05.009 "qid": 0, 00:20:05.009 "state": "enabled", 00:20:05.009 "thread": "nvmf_tgt_poll_group_000", 00:20:05.009 "listen_address": { 00:20:05.009 "trtype": "TCP", 00:20:05.009 "adrfam": "IPv4", 00:20:05.009 "traddr": "10.0.0.2", 00:20:05.009 "trsvcid": "4420" 00:20:05.009 }, 00:20:05.009 "peer_address": { 00:20:05.009 "trtype": "TCP", 00:20:05.009 "adrfam": "IPv4", 00:20:05.009 "traddr": "10.0.0.1", 00:20:05.009 "trsvcid": "44126" 00:20:05.009 }, 00:20:05.009 "auth": { 00:20:05.009 "state": "completed", 00:20:05.009 "digest": "sha384", 00:20:05.009 "dhgroup": "ffdhe6144" 00:20:05.009 } 00:20:05.009 } 00:20:05.009 ]' 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.009 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.266 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.266 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.266 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.524 07:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:20:06.456 07:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.456 07:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.456 07:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.456 07:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.456 07:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.456 07:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.457 07:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.457 07:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.715 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.278 00:20:07.278 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.278 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.278 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.536 { 00:20:07.536 "cntlid": 83, 00:20:07.536 "qid": 0, 00:20:07.536 "state": "enabled", 00:20:07.536 "thread": "nvmf_tgt_poll_group_000", 00:20:07.536 "listen_address": { 00:20:07.536 "trtype": "TCP", 00:20:07.536 "adrfam": "IPv4", 00:20:07.536 "traddr": "10.0.0.2", 00:20:07.536 "trsvcid": "4420" 00:20:07.536 }, 00:20:07.536 "peer_address": { 00:20:07.536 "trtype": "TCP", 00:20:07.536 "adrfam": "IPv4", 00:20:07.536 "traddr": "10.0.0.1", 00:20:07.536 "trsvcid": "44152" 00:20:07.536 }, 00:20:07.536 "auth": { 00:20:07.536 "state": "completed", 00:20:07.536 "digest": "sha384", 00:20:07.536 "dhgroup": "ffdhe6144" 00:20:07.536 } 00:20:07.536 } 00:20:07.536 ]' 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.536 07:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.794 07:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret 
DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.728 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.986 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.552 00:20:09.552 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.552 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.552 07:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.810 { 00:20:09.810 "cntlid": 85, 00:20:09.810 "qid": 0, 00:20:09.810 "state": "enabled", 00:20:09.810 "thread": "nvmf_tgt_poll_group_000", 00:20:09.810 "listen_address": { 00:20:09.810 "trtype": "TCP", 00:20:09.810 "adrfam": "IPv4", 00:20:09.810 "traddr": "10.0.0.2", 00:20:09.810 "trsvcid": "4420" 00:20:09.810 }, 00:20:09.810 "peer_address": { 00:20:09.810 "trtype": "TCP", 00:20:09.810 "adrfam": "IPv4", 00:20:09.810 "traddr": "10.0.0.1", 00:20:09.810 "trsvcid": "60994" 00:20:09.810 }, 00:20:09.810 "auth": { 00:20:09.810 "state": "completed", 00:20:09.810 "digest": "sha384", 00:20:09.810 "dhgroup": "ffdhe6144" 00:20:09.810 } 00:20:09.810 } 00:20:09.810 ]' 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.810 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.068 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.068 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.068 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.068 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.068 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.338 07:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
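Each round of the test opens the way the entry above does: before any controller is attached, the host-side bdev_nvme layer is pinned to exactly one digest and one DH group, so a successful attach can only mean that specific combination was negotiated. A minimal standalone sketch of that step, reusing the rpc.py path and host socket that appear throughout this log (the shell variables are ours, added for readability; they are not in the script):

#!/usr/bin/env bash
# Host-side DH-HMAC-CHAP restriction, repeated before every iteration in this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock    # RPC socket of the host (initiator) SPDK app

# Permit only sha384 + ffdhe6144 for this round; any other negotiation would fail.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests sha384 \
    --dhchap-dhgroups ffdhe6144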
00:20:11.301 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.559 07:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.123 00:20:12.123 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.123 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.123 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.381 { 00:20:12.381 "cntlid": 87, 00:20:12.381 "qid": 0, 00:20:12.381 "state": "enabled", 00:20:12.381 "thread": "nvmf_tgt_poll_group_000", 00:20:12.381 "listen_address": { 00:20:12.381 "trtype": "TCP", 00:20:12.381 "adrfam": "IPv4", 00:20:12.381 "traddr": "10.0.0.2", 00:20:12.381 "trsvcid": "4420" 00:20:12.381 }, 00:20:12.381 "peer_address": { 00:20:12.381 "trtype": "TCP", 00:20:12.381 "adrfam": "IPv4", 00:20:12.381 "traddr": "10.0.0.1", 00:20:12.381 "trsvcid": "32770" 00:20:12.381 }, 00:20:12.381 "auth": { 00:20:12.381 "state": "completed", 
00:20:12.381 "digest": "sha384", 00:20:12.381 "dhgroup": "ffdhe6144" 00:20:12.381 } 00:20:12.381 } 00:20:12.381 ]' 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.381 07:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.639 07:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.571 07:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.829 07:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.761 00:20:14.761 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.761 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.761 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.017 { 00:20:15.017 "cntlid": 89, 00:20:15.017 "qid": 0, 00:20:15.017 "state": "enabled", 00:20:15.017 "thread": "nvmf_tgt_poll_group_000", 00:20:15.017 "listen_address": { 00:20:15.017 "trtype": "TCP", 00:20:15.017 "adrfam": "IPv4", 00:20:15.017 "traddr": "10.0.0.2", 00:20:15.017 "trsvcid": "4420" 00:20:15.017 }, 00:20:15.017 "peer_address": { 00:20:15.017 "trtype": "TCP", 00:20:15.017 "adrfam": "IPv4", 00:20:15.017 "traddr": "10.0.0.1", 00:20:15.017 "trsvcid": "32802" 00:20:15.017 }, 00:20:15.017 "auth": { 00:20:15.017 "state": "completed", 00:20:15.017 "digest": "sha384", 00:20:15.017 "dhgroup": "ffdhe8192" 00:20:15.017 } 00:20:15.017 } 00:20:15.017 ]' 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:15.017 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.273 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.273 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.273 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.530 07:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:16.460 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.718 07:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
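The entry above is the attach half of one cycle. Pieced together from the target/auth.sh line references visible in this log (@34 through @49), a full connect_authenticate round looks roughly like the sketch below. It assumes the script's own helpers: rpc_cmd talks to the target's default RPC socket, hostrpc wraps rpc.py -s /var/tmp/host.sock, and key1/ckey1 name keys registered earlier in the test, outside this excerpt.

# One connect_authenticate round (sha384 + ffdhe8192, key slot 1), reconstructed
# from the @-line markers in this log; a sketch, not the verbatim script.
NQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1          # target side (@39)
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$NQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1   # host side (@40)

# Confirm the controller came up, then check that the qpair authenticated with
# exactly the expected parameters (@44-@48) before tearing it down again (@49).
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$NQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
hostrpc bdev_nvme_detach_controller nvme0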
00:20:17.652 00:20:17.652 07:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.652 07:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.652 07:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.911 { 00:20:17.911 "cntlid": 91, 00:20:17.911 "qid": 0, 00:20:17.911 "state": "enabled", 00:20:17.911 "thread": "nvmf_tgt_poll_group_000", 00:20:17.911 "listen_address": { 00:20:17.911 "trtype": "TCP", 00:20:17.911 "adrfam": "IPv4", 00:20:17.911 "traddr": "10.0.0.2", 00:20:17.911 "trsvcid": "4420" 00:20:17.911 }, 00:20:17.911 "peer_address": { 00:20:17.911 "trtype": "TCP", 00:20:17.911 "adrfam": "IPv4", 00:20:17.911 "traddr": "10.0.0.1", 00:20:17.911 "trsvcid": "32824" 00:20:17.911 }, 00:20:17.911 "auth": { 00:20:17.911 "state": "completed", 00:20:17.911 "digest": "sha384", 00:20:17.911 "dhgroup": "ffdhe8192" 00:20:17.911 } 00:20:17.911 } 00:20:17.911 ]' 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.911 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.169 07:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.104 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.362 07:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.295 00:20:20.295 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.295 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.295 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.553 { 
00:20:20.553 "cntlid": 93, 00:20:20.553 "qid": 0, 00:20:20.553 "state": "enabled", 00:20:20.553 "thread": "nvmf_tgt_poll_group_000", 00:20:20.553 "listen_address": { 00:20:20.553 "trtype": "TCP", 00:20:20.553 "adrfam": "IPv4", 00:20:20.553 "traddr": "10.0.0.2", 00:20:20.553 "trsvcid": "4420" 00:20:20.553 }, 00:20:20.553 "peer_address": { 00:20:20.553 "trtype": "TCP", 00:20:20.553 "adrfam": "IPv4", 00:20:20.553 "traddr": "10.0.0.1", 00:20:20.553 "trsvcid": "33070" 00:20:20.553 }, 00:20:20.553 "auth": { 00:20:20.553 "state": "completed", 00:20:20.553 "digest": "sha384", 00:20:20.553 "dhgroup": "ffdhe8192" 00:20:20.553 } 00:20:20.553 } 00:20:20.553 ]' 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.553 07:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.811 07:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.811 07:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.811 07:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.069 07:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:20:22.001 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.001 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.001 07:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.001 07:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.001 07:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.002 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.002 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.002 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.259 07:07:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.259 07:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.191 00:20:23.191 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.191 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.191 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.470 { 00:20:23.470 "cntlid": 95, 00:20:23.470 "qid": 0, 00:20:23.470 "state": "enabled", 00:20:23.470 "thread": "nvmf_tgt_poll_group_000", 00:20:23.470 "listen_address": { 00:20:23.470 "trtype": "TCP", 00:20:23.470 "adrfam": "IPv4", 00:20:23.470 "traddr": "10.0.0.2", 00:20:23.470 "trsvcid": "4420" 00:20:23.470 }, 00:20:23.470 "peer_address": { 00:20:23.470 "trtype": "TCP", 00:20:23.470 "adrfam": "IPv4", 00:20:23.470 "traddr": "10.0.0.1", 00:20:23.470 "trsvcid": "33104" 00:20:23.470 }, 00:20:23.470 "auth": { 00:20:23.470 "state": "completed", 00:20:23.470 "digest": "sha384", 00:20:23.470 "dhgroup": "ffdhe8192" 00:20:23.470 } 00:20:23.470 } 00:20:23.470 ]' 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.470 07:07:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.470 07:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.736 07:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:20:24.668 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.668 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.668 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.668 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.925 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.925 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:24.925 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.925 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.925 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.925 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.183 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.441 00:20:25.441 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.441 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.441 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.699 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.699 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.699 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.699 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.699 07:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.699 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.699 { 00:20:25.699 "cntlid": 97, 00:20:25.699 "qid": 0, 00:20:25.699 "state": "enabled", 00:20:25.699 "thread": "nvmf_tgt_poll_group_000", 00:20:25.699 "listen_address": { 00:20:25.699 "trtype": "TCP", 00:20:25.699 "adrfam": "IPv4", 00:20:25.699 "traddr": "10.0.0.2", 00:20:25.699 "trsvcid": "4420" 00:20:25.699 }, 00:20:25.699 "peer_address": { 00:20:25.699 "trtype": "TCP", 00:20:25.699 "adrfam": "IPv4", 00:20:25.699 "traddr": "10.0.0.1", 00:20:25.699 "trsvcid": "33136" 00:20:25.699 }, 00:20:25.699 "auth": { 00:20:25.699 "state": "completed", 00:20:25.699 "digest": "sha512", 00:20:25.699 "dhgroup": "null" 00:20:25.699 } 00:20:25.699 } 00:20:25.699 ]' 00:20:25.699 07:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.699 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.699 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.699 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.699 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.700 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.700 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.700 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.958 07:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret 
DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.331 07:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.332 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.332 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.590 00:20:27.590 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.590 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.590 07:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.848 { 00:20:27.848 "cntlid": 99, 00:20:27.848 "qid": 0, 00:20:27.848 "state": "enabled", 00:20:27.848 "thread": "nvmf_tgt_poll_group_000", 00:20:27.848 "listen_address": { 00:20:27.848 "trtype": "TCP", 00:20:27.848 "adrfam": "IPv4", 00:20:27.848 "traddr": "10.0.0.2", 00:20:27.848 "trsvcid": "4420" 00:20:27.848 }, 00:20:27.848 "peer_address": { 00:20:27.848 "trtype": "TCP", 00:20:27.848 "adrfam": "IPv4", 00:20:27.848 "traddr": "10.0.0.1", 00:20:27.848 "trsvcid": "33168" 00:20:27.848 }, 00:20:27.848 "auth": { 00:20:27.848 "state": "completed", 00:20:27.848 "digest": "sha512", 00:20:27.848 "dhgroup": "null" 00:20:27.848 } 00:20:27.848 } 00:20:27.848 ]' 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.848 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.106 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:28.106 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.106 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.106 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.106 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.363 07:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:20:29.294 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.294 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.294 07:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.294 07:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.294 07:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.294 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.294 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.294 07:07:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.551 07:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.809 00:20:29.809 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.809 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.809 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.067 { 00:20:30.067 "cntlid": 101, 00:20:30.067 "qid": 0, 00:20:30.067 "state": "enabled", 00:20:30.067 "thread": "nvmf_tgt_poll_group_000", 00:20:30.067 "listen_address": { 00:20:30.067 "trtype": "TCP", 00:20:30.067 "adrfam": "IPv4", 00:20:30.067 "traddr": "10.0.0.2", 00:20:30.067 "trsvcid": "4420" 00:20:30.067 }, 00:20:30.067 "peer_address": { 00:20:30.067 "trtype": "TCP", 00:20:30.067 "adrfam": "IPv4", 00:20:30.067 "traddr": "10.0.0.1", 00:20:30.067 "trsvcid": "42748" 00:20:30.067 }, 00:20:30.067 "auth": 
{ 00:20:30.067 "state": "completed", 00:20:30.067 "digest": "sha512", 00:20:30.067 "dhgroup": "null" 00:20:30.067 } 00:20:30.067 } 00:20:30.067 ]' 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.067 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.325 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:30.325 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.325 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.325 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.325 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.582 07:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.513 07:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.770 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.027 00:20:32.027 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.027 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.027 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.284 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.284 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.284 07:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.284 07:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.284 07:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.284 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.284 { 00:20:32.284 "cntlid": 103, 00:20:32.284 "qid": 0, 00:20:32.284 "state": "enabled", 00:20:32.284 "thread": "nvmf_tgt_poll_group_000", 00:20:32.284 "listen_address": { 00:20:32.284 "trtype": "TCP", 00:20:32.284 "adrfam": "IPv4", 00:20:32.284 "traddr": "10.0.0.2", 00:20:32.284 "trsvcid": "4420" 00:20:32.284 }, 00:20:32.284 "peer_address": { 00:20:32.284 "trtype": "TCP", 00:20:32.284 "adrfam": "IPv4", 00:20:32.284 "traddr": "10.0.0.1", 00:20:32.284 "trsvcid": "42768" 00:20:32.284 }, 00:20:32.284 "auth": { 00:20:32.284 "state": "completed", 00:20:32.284 "digest": "sha512", 00:20:32.284 "dhgroup": "null" 00:20:32.284 } 00:20:32.284 } 00:20:32.284 ]' 00:20:32.284 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.539 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.540 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.540 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:32.540 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.540 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.540 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.540 07:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.796 07:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:33.722 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.979 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.980 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.544 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.544 07:08:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.544 { 00:20:34.544 "cntlid": 105, 00:20:34.544 "qid": 0, 00:20:34.544 "state": "enabled", 00:20:34.544 "thread": "nvmf_tgt_poll_group_000", 00:20:34.544 "listen_address": { 00:20:34.544 "trtype": "TCP", 00:20:34.544 "adrfam": "IPv4", 00:20:34.544 "traddr": "10.0.0.2", 00:20:34.544 "trsvcid": "4420" 00:20:34.544 }, 00:20:34.544 "peer_address": { 00:20:34.544 "trtype": "TCP", 00:20:34.544 "adrfam": "IPv4", 00:20:34.544 "traddr": "10.0.0.1", 00:20:34.544 "trsvcid": "42804" 00:20:34.544 }, 00:20:34.544 "auth": { 00:20:34.544 "state": "completed", 00:20:34.544 "digest": "sha512", 00:20:34.544 "dhgroup": "ffdhe2048" 00:20:34.544 } 00:20:34.544 } 00:20:34.544 ]' 00:20:34.544 07:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.801 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.801 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.801 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.801 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.801 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.801 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.801 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.058 07:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
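
The round traced above is one complete pass of connect_authenticate (digest sha512, dhgroup ffdhe2048, key index 0): the host-side bdev layer is pinned to a single digest/DH-group combination, the target authorizes the host NQN with the key pair under test, the attach forces a DH-HMAC-CHAP handshake over TCP, and the negotiated parameters are read back from the qpair before everything is torn down and repeated through the kernel initiator. Below is a condensed sketch of that pass, reconstructed from the xtrace output rather than taken verbatim from target/auth.sh; hostnqn, hostid, key0 and ckey0 stand for the host NQN/UUID and the keyring entries this run registered earlier, and hostrpc/rpc_cmd are the harness wrappers around rpc.py for the host-side (-s /var/tmp/host.sock) and target-side sockets:

  # host side: permit exactly one digest and one DH group for DH-HMAC-CHAP
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # target side: authorize the host NQN with the key pair under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attaching the controller triggers the DH-HMAC-CHAP handshake
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # the qpair must report the negotiated digest/dhgroup and a finished handshake
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "sha512" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
  hostrpc bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator, passing the raw DHHC-1 secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
    --hostid "$hostid" --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The escaped comparisons in the trace ([[ sha512 == \s\h\a\5\1\2 ]] and friends) are plain string checks; bash xtrace prints the quoted right-hand side of == with each character backslash-escaped so that it matches literally instead of as a glob pattern.
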
00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.995 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.270 07:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.834 00:20:36.835 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.835 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.835 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.835 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.835 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.835 07:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.835 07:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.092 { 00:20:37.092 "cntlid": 107, 00:20:37.092 "qid": 0, 00:20:37.092 "state": "enabled", 00:20:37.092 "thread": 
"nvmf_tgt_poll_group_000", 00:20:37.092 "listen_address": { 00:20:37.092 "trtype": "TCP", 00:20:37.092 "adrfam": "IPv4", 00:20:37.092 "traddr": "10.0.0.2", 00:20:37.092 "trsvcid": "4420" 00:20:37.092 }, 00:20:37.092 "peer_address": { 00:20:37.092 "trtype": "TCP", 00:20:37.092 "adrfam": "IPv4", 00:20:37.092 "traddr": "10.0.0.1", 00:20:37.092 "trsvcid": "42834" 00:20:37.092 }, 00:20:37.092 "auth": { 00:20:37.092 "state": "completed", 00:20:37.092 "digest": "sha512", 00:20:37.092 "dhgroup": "ffdhe2048" 00:20:37.092 } 00:20:37.092 } 00:20:37.092 ]' 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.092 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.349 07:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:38.281 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:38.539 07:08:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.539 07:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.797 07:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.797 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.797 07:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.055 00:20:39.055 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.055 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.055 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.312 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.312 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.313 { 00:20:39.313 "cntlid": 109, 00:20:39.313 "qid": 0, 00:20:39.313 "state": "enabled", 00:20:39.313 "thread": "nvmf_tgt_poll_group_000", 00:20:39.313 "listen_address": { 00:20:39.313 "trtype": "TCP", 00:20:39.313 "adrfam": "IPv4", 00:20:39.313 "traddr": "10.0.0.2", 00:20:39.313 "trsvcid": "4420" 00:20:39.313 }, 00:20:39.313 "peer_address": { 00:20:39.313 "trtype": "TCP", 00:20:39.313 "adrfam": "IPv4", 00:20:39.313 "traddr": "10.0.0.1", 00:20:39.313 "trsvcid": "52222" 00:20:39.313 }, 00:20:39.313 "auth": { 00:20:39.313 "state": "completed", 00:20:39.313 "digest": "sha512", 00:20:39.313 "dhgroup": "ffdhe2048" 00:20:39.313 } 00:20:39.313 } 00:20:39.313 ]' 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.313 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.571 07:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.503 07:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.761 07:08:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.019 00:20:41.019 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.019 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.019 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.277 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.277 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.277 07:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 07:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 07:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.277 { 00:20:41.277 "cntlid": 111, 00:20:41.277 "qid": 0, 00:20:41.277 "state": "enabled", 00:20:41.277 "thread": "nvmf_tgt_poll_group_000", 00:20:41.277 "listen_address": { 00:20:41.277 "trtype": "TCP", 00:20:41.277 "adrfam": "IPv4", 00:20:41.277 "traddr": "10.0.0.2", 00:20:41.277 "trsvcid": "4420" 00:20:41.277 }, 00:20:41.277 "peer_address": { 00:20:41.277 "trtype": "TCP", 00:20:41.277 "adrfam": "IPv4", 00:20:41.277 "traddr": "10.0.0.1", 00:20:41.277 "trsvcid": "52256" 00:20:41.277 }, 00:20:41.277 "auth": { 00:20:41.277 "state": "completed", 00:20:41.277 "digest": "sha512", 00:20:41.277 "dhgroup": "ffdhe2048" 00:20:41.277 } 00:20:41.277 } 00:20:41.277 ]' 00:20:41.277 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.534 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.534 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.534 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.534 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.534 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.534 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.534 07:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.792 07:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:42.727 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:42.985 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:43.243
00:20:43.243 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:43.243 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:43.243 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
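
At this point the middle loop has moved on to the third DH group (ffdhe3072) and the key index has restarted at 0. The @91, @92 and @93 markers trace a triple loop over digests, DH groups and key indices, with the host re-pinned before every round; a minimal sketch of that skeleton as it can be read off the trace (digests, dhgroups and keys are the harness arrays, and connect_authenticate is the per-round body condensed in the sketch after the sha512/ffdhe2048 round above):

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # pin the host to exactly this digest/dhgroup pair, then run one
        # attach/verify/teardown round with key $keyid
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done

The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) assignment traced at target/auth.sh@37 is what makes the controller key optional: when ckeys[$3] is empty the array expands to nothing and no --dhchap-ctrlr-key argument is emitted at all, which is why the key3 rounds in this log authenticate with a one-way key only.

00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target --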
common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.501 { 00:20:43.501 "cntlid": 113, 00:20:43.501 "qid": 0, 00:20:43.501 "state": "enabled", 00:20:43.501 "thread": "nvmf_tgt_poll_group_000", 00:20:43.501 "listen_address": { 00:20:43.501 "trtype": "TCP", 00:20:43.501 "adrfam": "IPv4", 00:20:43.501 "traddr": "10.0.0.2", 00:20:43.501 "trsvcid": "4420" 00:20:43.501 }, 00:20:43.501 "peer_address": { 00:20:43.501 "trtype": "TCP", 00:20:43.501 "adrfam": "IPv4", 00:20:43.501 "traddr": "10.0.0.1", 00:20:43.501 "trsvcid": "52288" 00:20:43.501 }, 00:20:43.501 "auth": { 00:20:43.501 "state": "completed", 00:20:43.501 "digest": "sha512", 00:20:43.501 "dhgroup": "ffdhe3072" 00:20:43.501 } 00:20:43.501 } 00:20:43.501 ]' 00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.501 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.759 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.759 07:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.759 07:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.759 07:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.759 07:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.016 07:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:20:44.949 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.949 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.949 07:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.949 07:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.950 07:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.950 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.950 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.950 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.207 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.465 00:20:45.465 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.465 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.465 07:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.723 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.723 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.723 07:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.723 07:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.723 07:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.981 { 00:20:45.981 "cntlid": 115, 00:20:45.981 "qid": 0, 00:20:45.981 "state": "enabled", 00:20:45.981 "thread": "nvmf_tgt_poll_group_000", 00:20:45.981 "listen_address": { 00:20:45.981 "trtype": "TCP", 00:20:45.981 "adrfam": "IPv4", 00:20:45.981 "traddr": "10.0.0.2", 00:20:45.981 "trsvcid": "4420" 00:20:45.981 }, 00:20:45.981 "peer_address": { 00:20:45.981 "trtype": "TCP", 00:20:45.981 "adrfam": "IPv4", 00:20:45.981 "traddr": "10.0.0.1", 00:20:45.981 "trsvcid": "52300" 00:20:45.981 }, 00:20:45.981 "auth": { 00:20:45.981 "state": "completed", 00:20:45.981 "digest": "sha512", 00:20:45.981 "dhgroup": "ffdhe3072" 00:20:45.981 } 00:20:45.981 } 
00:20:45.981 ]' 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.981 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.239 07:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.172 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.430 07:08:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.430 07:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.996 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.996 { 00:20:47.996 "cntlid": 117, 00:20:47.996 "qid": 0, 00:20:47.996 "state": "enabled", 00:20:47.996 "thread": "nvmf_tgt_poll_group_000", 00:20:47.996 "listen_address": { 00:20:47.996 "trtype": "TCP", 00:20:47.996 "adrfam": "IPv4", 00:20:47.996 "traddr": "10.0.0.2", 00:20:47.996 "trsvcid": "4420" 00:20:47.996 }, 00:20:47.996 "peer_address": { 00:20:47.996 "trtype": "TCP", 00:20:47.996 "adrfam": "IPv4", 00:20:47.996 "traddr": "10.0.0.1", 00:20:47.996 "trsvcid": "52324" 00:20:47.996 }, 00:20:47.996 "auth": { 00:20:47.996 "state": "completed", 00:20:47.996 "digest": "sha512", 00:20:47.996 "dhgroup": "ffdhe3072" 00:20:47.996 } 00:20:47.996 } 00:20:47.996 ]' 00:20:47.996 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.256 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.256 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.256 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.256 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.256 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.256 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.256 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.519 07:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.452 07:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.710 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.276 00:20:50.276 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.276 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.276 07:08:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.276 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.276 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.276 07:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.276 07:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.533 { 00:20:50.533 "cntlid": 119, 00:20:50.533 "qid": 0, 00:20:50.533 "state": "enabled", 00:20:50.533 "thread": "nvmf_tgt_poll_group_000", 00:20:50.533 "listen_address": { 00:20:50.533 "trtype": "TCP", 00:20:50.533 "adrfam": "IPv4", 00:20:50.533 "traddr": "10.0.0.2", 00:20:50.533 "trsvcid": "4420" 00:20:50.533 }, 00:20:50.533 "peer_address": { 00:20:50.533 "trtype": "TCP", 00:20:50.533 "adrfam": "IPv4", 00:20:50.533 "traddr": "10.0.0.1", 00:20:50.533 "trsvcid": "55584" 00:20:50.533 }, 00:20:50.533 "auth": { 00:20:50.533 "state": "completed", 00:20:50.533 "digest": "sha512", 00:20:50.533 "dhgroup": "ffdhe3072" 00:20:50.533 } 00:20:50.533 } 00:20:50.533 ]' 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.533 07:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.791 07:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.722 07:08:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:51.722 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.980 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.544 00:20:52.544 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.544 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.544 07:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.801 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.801 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.801 07:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.801 07:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.801 07:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.801 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.802 { 00:20:52.802 "cntlid": 121, 00:20:52.802 "qid": 0, 00:20:52.802 "state": "enabled", 00:20:52.802 "thread": "nvmf_tgt_poll_group_000", 00:20:52.802 "listen_address": { 00:20:52.802 "trtype": "TCP", 00:20:52.802 "adrfam": "IPv4", 
00:20:52.802 "traddr": "10.0.0.2", 00:20:52.802 "trsvcid": "4420" 00:20:52.802 }, 00:20:52.802 "peer_address": { 00:20:52.802 "trtype": "TCP", 00:20:52.802 "adrfam": "IPv4", 00:20:52.802 "traddr": "10.0.0.1", 00:20:52.802 "trsvcid": "55614" 00:20:52.802 }, 00:20:52.802 "auth": { 00:20:52.802 "state": "completed", 00:20:52.802 "digest": "sha512", 00:20:52.802 "dhgroup": "ffdhe4096" 00:20:52.802 } 00:20:52.802 } 00:20:52.802 ]' 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.802 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.060 07:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:53.990 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:54.247 07:08:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.247 07:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.812 00:20:54.812 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.812 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.812 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.068 { 00:20:55.068 "cntlid": 123, 00:20:55.068 "qid": 0, 00:20:55.068 "state": "enabled", 00:20:55.068 "thread": "nvmf_tgt_poll_group_000", 00:20:55.068 "listen_address": { 00:20:55.068 "trtype": "TCP", 00:20:55.068 "adrfam": "IPv4", 00:20:55.068 "traddr": "10.0.0.2", 00:20:55.068 "trsvcid": "4420" 00:20:55.068 }, 00:20:55.068 "peer_address": { 00:20:55.068 "trtype": "TCP", 00:20:55.068 "adrfam": "IPv4", 00:20:55.068 "traddr": "10.0.0.1", 00:20:55.068 "trsvcid": "55646" 00:20:55.068 }, 00:20:55.068 "auth": { 00:20:55.068 "state": "completed", 00:20:55.068 "digest": "sha512", 00:20:55.068 "dhgroup": "ffdhe4096" 00:20:55.068 } 00:20:55.068 } 00:20:55.068 ]' 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.068 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.069 07:08:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.069 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.069 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.326 07:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.257 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.516 07:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.081 00:20:57.081 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.081 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.081 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.081 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.081 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.081 07:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.081 07:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.338 07:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.338 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.338 { 00:20:57.338 "cntlid": 125, 00:20:57.338 "qid": 0, 00:20:57.338 "state": "enabled", 00:20:57.338 "thread": "nvmf_tgt_poll_group_000", 00:20:57.338 "listen_address": { 00:20:57.338 "trtype": "TCP", 00:20:57.338 "adrfam": "IPv4", 00:20:57.338 "traddr": "10.0.0.2", 00:20:57.338 "trsvcid": "4420" 00:20:57.338 }, 00:20:57.338 "peer_address": { 00:20:57.338 "trtype": "TCP", 00:20:57.338 "adrfam": "IPv4", 00:20:57.338 "traddr": "10.0.0.1", 00:20:57.338 "trsvcid": "55662" 00:20:57.338 }, 00:20:57.338 "auth": { 00:20:57.338 "state": "completed", 00:20:57.338 "digest": "sha512", 00:20:57.338 "dhgroup": "ffdhe4096" 00:20:57.338 } 00:20:57.338 } 00:20:57.338 ]' 00:20:57.338 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.339 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.339 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.339 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.339 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.339 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.339 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.339 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.596 07:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
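Each round of this trace exercises one digest/DH-group/key combination end to end: the host-side bdev_nvme stack is pinned to a single DH-HMAC-CHAP digest and DH group, the host NQN is registered on the target with the keys under test, a controller is attached and the negotiated auth state is read back from the target's qpair list, and the same handshake is then repeated through the kernel initiator with nvme-cli. What follows is a condensed sketch of one round, reconstructed from the commands in the trace above — the socket path, NQNs, and RPC names are taken verbatim from the log, while the key names (key0/ckey0) are assumed to be keyring entries registered earlier in the run, and the DHHC-1 secrets are stand-in variables rather than the literal values shown in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Pin the host-side initiator (its RPC server listens on /var/tmp/host.sock,
# the socket hostrpc passes via -s) to one digest/DH-group combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Register the host on the target with the keys under test; the target app is
# assumed to listen on the default RPC socket, matching rpc_cmd in the trace.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach from the SPDK host stack; authentication runs during connect.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0

# Read back what the target actually negotiated on the new qpair.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # "completed"
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # "ffdhe4096"

# Tear down, then repeat the handshake with the kernel initiator; the two
# secrets are placeholders for the literal DHHC-1 values used in the trace.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Driving the same combination through both the SPDK host stack and nvme-cli is what produces the paired attach/detach and connect/disconnect entries in every round above, and the jq probes of '.auth.digest', '.auth.dhgroup', and '.auth.state' are the actual pass/fail assertions of the test.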
00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.527 07:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.784 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:58.784 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.784 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.784 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:58.784 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:58.785 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.785 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.785 07:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.785 07:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.785 07:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.785 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.785 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.350 00:20:59.350 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.350 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.350 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.609 { 00:20:59.609 "cntlid": 127, 00:20:59.609 "qid": 0, 00:20:59.609 "state": "enabled", 00:20:59.609 "thread": "nvmf_tgt_poll_group_000", 00:20:59.609 "listen_address": { 00:20:59.609 "trtype": "TCP", 00:20:59.609 "adrfam": "IPv4", 00:20:59.609 "traddr": "10.0.0.2", 00:20:59.609 "trsvcid": "4420" 00:20:59.609 }, 00:20:59.609 "peer_address": { 00:20:59.609 "trtype": "TCP", 00:20:59.609 "adrfam": "IPv4", 00:20:59.609 "traddr": "10.0.0.1", 00:20:59.609 "trsvcid": "33348" 00:20:59.609 }, 00:20:59.609 "auth": { 00:20:59.609 "state": "completed", 00:20:59.609 "digest": "sha512", 00:20:59.609 "dhgroup": "ffdhe4096" 00:20:59.609 } 00:20:59.609 } 00:20:59.609 ]' 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.609 07:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.867 07:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.800 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.060 07:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.636 00:21:01.636 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.636 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.636 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.895 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.895 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.895 07:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.895 07:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.895 07:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.895 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.895 { 00:21:01.895 "cntlid": 129, 00:21:01.895 "qid": 0, 00:21:01.895 "state": "enabled", 00:21:01.895 "thread": "nvmf_tgt_poll_group_000", 00:21:01.895 "listen_address": { 00:21:01.895 "trtype": "TCP", 00:21:01.895 "adrfam": "IPv4", 00:21:01.895 "traddr": "10.0.0.2", 00:21:01.895 "trsvcid": "4420" 00:21:01.895 }, 00:21:01.895 "peer_address": { 00:21:01.895 "trtype": "TCP", 00:21:01.895 "adrfam": "IPv4", 00:21:01.895 "traddr": "10.0.0.1", 00:21:01.895 "trsvcid": "33392" 00:21:01.895 }, 00:21:01.895 "auth": { 00:21:01.895 "state": "completed", 00:21:01.895 "digest": "sha512", 00:21:01.895 "dhgroup": "ffdhe6144" 00:21:01.895 } 00:21:01.895 } 00:21:01.895 ]' 00:21:01.895 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.154 07:08:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.154 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.154 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.154 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.154 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.154 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.154 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.413 07:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.346 07:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.602 07:08:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.602 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.166 00:21:04.166 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.166 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.166 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.424 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.424 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.424 07:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.424 07:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.424 07:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.424 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.424 { 00:21:04.424 "cntlid": 131, 00:21:04.424 "qid": 0, 00:21:04.424 "state": "enabled", 00:21:04.424 "thread": "nvmf_tgt_poll_group_000", 00:21:04.424 "listen_address": { 00:21:04.424 "trtype": "TCP", 00:21:04.424 "adrfam": "IPv4", 00:21:04.424 "traddr": "10.0.0.2", 00:21:04.424 "trsvcid": "4420" 00:21:04.424 }, 00:21:04.424 "peer_address": { 00:21:04.424 "trtype": "TCP", 00:21:04.424 "adrfam": "IPv4", 00:21:04.424 "traddr": "10.0.0.1", 00:21:04.424 "trsvcid": "33440" 00:21:04.424 }, 00:21:04.424 "auth": { 00:21:04.424 "state": "completed", 00:21:04.424 "digest": "sha512", 00:21:04.424 "dhgroup": "ffdhe6144" 00:21:04.424 } 00:21:04.424 } 00:21:04.424 ]' 00:21:04.424 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.683 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.683 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.683 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.683 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.683 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.683 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.683 07:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.940 07:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:21:05.873 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.874 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.874 07:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.874 07:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.874 07:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.874 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.874 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.874 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.132 07:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.697 00:21:06.697 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.697 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.697 07:08:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.955 { 00:21:06.955 "cntlid": 133, 00:21:06.955 "qid": 0, 00:21:06.955 "state": "enabled", 00:21:06.955 "thread": "nvmf_tgt_poll_group_000", 00:21:06.955 "listen_address": { 00:21:06.955 "trtype": "TCP", 00:21:06.955 "adrfam": "IPv4", 00:21:06.955 "traddr": "10.0.0.2", 00:21:06.955 "trsvcid": "4420" 00:21:06.955 }, 00:21:06.955 "peer_address": { 00:21:06.955 "trtype": "TCP", 00:21:06.955 "adrfam": "IPv4", 00:21:06.955 "traddr": "10.0.0.1", 00:21:06.955 "trsvcid": "33480" 00:21:06.955 }, 00:21:06.955 "auth": { 00:21:06.955 "state": "completed", 00:21:06.955 "digest": "sha512", 00:21:06.955 "dhgroup": "ffdhe6144" 00:21:06.955 } 00:21:06.955 } 00:21:06.955 ]' 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.955 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.213 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.213 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.213 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.213 07:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.586 07:08:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.586 07:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.150 00:21:09.150 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.150 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.150 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.408 { 00:21:09.408 "cntlid": 135, 00:21:09.408 "qid": 0, 00:21:09.408 "state": "enabled", 00:21:09.408 "thread": "nvmf_tgt_poll_group_000", 00:21:09.408 "listen_address": { 00:21:09.408 "trtype": "TCP", 00:21:09.408 "adrfam": "IPv4", 00:21:09.408 "traddr": "10.0.0.2", 00:21:09.408 "trsvcid": "4420" 00:21:09.408 }, 
00:21:09.408 "peer_address": { 00:21:09.408 "trtype": "TCP", 00:21:09.408 "adrfam": "IPv4", 00:21:09.408 "traddr": "10.0.0.1", 00:21:09.408 "trsvcid": "50150" 00:21:09.408 }, 00:21:09.408 "auth": { 00:21:09.408 "state": "completed", 00:21:09.408 "digest": "sha512", 00:21:09.408 "dhgroup": "ffdhe6144" 00:21:09.408 } 00:21:09.408 } 00:21:09.408 ]' 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.408 07:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.665 07:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.039 07:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.040 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.040 07:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.972 00:21:11.972 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.972 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.972 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.229 { 00:21:12.229 "cntlid": 137, 00:21:12.229 "qid": 0, 00:21:12.229 "state": "enabled", 00:21:12.229 "thread": "nvmf_tgt_poll_group_000", 00:21:12.229 "listen_address": { 00:21:12.229 "trtype": "TCP", 00:21:12.229 "adrfam": "IPv4", 00:21:12.229 "traddr": "10.0.0.2", 00:21:12.229 "trsvcid": "4420" 00:21:12.229 }, 00:21:12.229 "peer_address": { 00:21:12.229 "trtype": "TCP", 00:21:12.229 "adrfam": "IPv4", 00:21:12.229 "traddr": "10.0.0.1", 00:21:12.229 "trsvcid": "50172" 00:21:12.229 }, 00:21:12.229 "auth": { 00:21:12.229 "state": "completed", 00:21:12.229 "digest": "sha512", 00:21:12.229 "dhgroup": "ffdhe8192" 00:21:12.229 } 00:21:12.229 } 00:21:12.229 ]' 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.229 07:08:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.229 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.485 07:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.849 07:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.849 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:13.849 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.849 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.849 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:13.849 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.849 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.850 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.850 07:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.850 07:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.850 07:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.850 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.850 07:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.812 00:21:14.812 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.812 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.812 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.070 { 00:21:15.070 "cntlid": 139, 00:21:15.070 "qid": 0, 00:21:15.070 "state": "enabled", 00:21:15.070 "thread": "nvmf_tgt_poll_group_000", 00:21:15.070 "listen_address": { 00:21:15.070 "trtype": "TCP", 00:21:15.070 "adrfam": "IPv4", 00:21:15.070 "traddr": "10.0.0.2", 00:21:15.070 "trsvcid": "4420" 00:21:15.070 }, 00:21:15.070 "peer_address": { 00:21:15.070 "trtype": "TCP", 00:21:15.070 "adrfam": "IPv4", 00:21:15.070 "traddr": "10.0.0.1", 00:21:15.070 "trsvcid": "50198" 00:21:15.070 }, 00:21:15.070 "auth": { 00:21:15.070 "state": "completed", 00:21:15.070 "digest": "sha512", 00:21:15.070 "dhgroup": "ffdhe8192" 00:21:15.070 } 00:21:15.070 } 00:21:15.070 ]' 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.070 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.327 07:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NmMxMjMzMjJhNDIwM2FiZTQwOTU3OGY5YmQwODY4ZWYKs8kF: --dhchap-ctrl-secret DHHC-1:02:ODAyZTlmNmRlMjM3YTkzZTMwYTI1YzdjOGRmODFhY2Y4ZjliMzk4ODY3Nzc2NGI2IgYeQQ==: 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.260 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.517 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.518 07:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.775 07:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.775 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.775 07:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.707 00:21:17.707 07:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.707 07:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.707 07:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.707 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.707 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.707 07:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.707 07:08:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.707 07:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.707 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.707 { 00:21:17.707 "cntlid": 141, 00:21:17.707 "qid": 0, 00:21:17.707 "state": "enabled", 00:21:17.707 "thread": "nvmf_tgt_poll_group_000", 00:21:17.707 "listen_address": { 00:21:17.707 "trtype": "TCP", 00:21:17.707 "adrfam": "IPv4", 00:21:17.707 "traddr": "10.0.0.2", 00:21:17.707 "trsvcid": "4420" 00:21:17.707 }, 00:21:17.707 "peer_address": { 00:21:17.707 "trtype": "TCP", 00:21:17.707 "adrfam": "IPv4", 00:21:17.707 "traddr": "10.0.0.1", 00:21:17.707 "trsvcid": "50236" 00:21:17.707 }, 00:21:17.707 "auth": { 00:21:17.707 "state": "completed", 00:21:17.707 "digest": "sha512", 00:21:17.707 "dhgroup": "ffdhe8192" 00:21:17.707 } 00:21:17.707 } 00:21:17.707 ]' 00:21:17.707 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.964 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.965 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.965 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.965 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.965 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.965 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.965 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.222 07:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzBlNjkwY2Y5YTg3YzY2MTk3NzYyNDljZjI0ZDg1YWIwYjAwMjVjZDliZjAxZGU20LwWdg==: --dhchap-ctrl-secret DHHC-1:01:ZTgyMTMyYjBjYmVjNjY3ZjA2NmQ0ZGI1ZGM3Zjk4OGFDBohw: 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.158 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.416 07:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.349 00:21:20.349 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.349 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.349 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.608 { 00:21:20.608 "cntlid": 143, 00:21:20.608 "qid": 0, 00:21:20.608 "state": "enabled", 00:21:20.608 "thread": "nvmf_tgt_poll_group_000", 00:21:20.608 "listen_address": { 00:21:20.608 "trtype": "TCP", 00:21:20.608 "adrfam": "IPv4", 00:21:20.608 "traddr": "10.0.0.2", 00:21:20.608 "trsvcid": "4420" 00:21:20.608 }, 00:21:20.608 "peer_address": { 00:21:20.608 "trtype": "TCP", 00:21:20.608 "adrfam": "IPv4", 00:21:20.608 "traddr": "10.0.0.1", 00:21:20.608 "trsvcid": "60696" 00:21:20.608 }, 00:21:20.608 "auth": { 00:21:20.608 "state": "completed", 00:21:20.608 "digest": "sha512", 00:21:20.608 "dhgroup": "ffdhe8192" 00:21:20.608 } 00:21:20.608 } 00:21:20.608 ]' 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.608 
07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.608 07:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.866 07:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.800 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.058 07:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.992 00:21:22.992 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.992 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.992 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.251 { 00:21:23.251 "cntlid": 145, 00:21:23.251 "qid": 0, 00:21:23.251 "state": "enabled", 00:21:23.251 "thread": "nvmf_tgt_poll_group_000", 00:21:23.251 "listen_address": { 00:21:23.251 "trtype": "TCP", 00:21:23.251 "adrfam": "IPv4", 00:21:23.251 "traddr": "10.0.0.2", 00:21:23.251 "trsvcid": "4420" 00:21:23.251 }, 00:21:23.251 "peer_address": { 00:21:23.251 "trtype": "TCP", 00:21:23.251 "adrfam": "IPv4", 00:21:23.251 "traddr": "10.0.0.1", 00:21:23.251 "trsvcid": "60738" 00:21:23.251 }, 00:21:23.251 "auth": { 00:21:23.251 "state": "completed", 00:21:23.251 "digest": "sha512", 00:21:23.251 "dhgroup": "ffdhe8192" 00:21:23.251 } 00:21:23.251 } 00:21:23.251 ]' 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.251 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.509 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.509 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.509 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.509 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.509 07:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.765 07:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NTJlZGRlYzYxODRkYjIyY2FjNDUzYzZhNmY0MzQ5MjU1NzIzNDY1ODg0OTkwZGJlyJTZBQ==: --dhchap-ctrl-secret DHHC-1:03:OGVhN2JmZGQ0YzA3NDhmZDBmNDZmNWE2NWM3MTE5YjYxNWU5MWQ3OTZjNzNmM2YwMTdmOTQxZmRlYjY3YmJmOcAPyRg=: 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.696 07:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:24.696 07:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:21:25.630 request: 00:21:25.630 { 00:21:25.630 "name": "nvme0", 00:21:25.630 "trtype": "tcp", 00:21:25.630 "traddr": "10.0.0.2", 00:21:25.630 "adrfam": "ipv4", 00:21:25.630 "trsvcid": "4420", 00:21:25.630 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:25.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.630 "prchk_reftag": false, 00:21:25.630 "prchk_guard": false, 00:21:25.630 "hdgst": false, 00:21:25.630 "ddgst": false, 00:21:25.630 "dhchap_key": "key2", 00:21:25.630 "method": "bdev_nvme_attach_controller", 00:21:25.630 "req_id": 1 00:21:25.630 } 00:21:25.630 Got JSON-RPC error response 00:21:25.630 response: 00:21:25.630 { 00:21:25.630 "code": -5, 00:21:25.630 "message": "Input/output error" 00:21:25.630 } 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.630 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:25.631 07:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:26.564 request: 00:21:26.564 { 00:21:26.564 "name": "nvme0", 00:21:26.564 "trtype": "tcp", 00:21:26.564 "traddr": "10.0.0.2", 00:21:26.564 "adrfam": "ipv4", 00:21:26.564 "trsvcid": "4420", 00:21:26.564 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.564 "prchk_reftag": false, 00:21:26.564 "prchk_guard": false, 00:21:26.564 "hdgst": false, 00:21:26.564 "ddgst": false, 00:21:26.564 "dhchap_key": "key1", 00:21:26.564 "dhchap_ctrlr_key": "ckey2", 00:21:26.564 "method": "bdev_nvme_attach_controller", 00:21:26.564 "req_id": 1 00:21:26.564 } 00:21:26.564 Got JSON-RPC error response 00:21:26.564 response: 00:21:26.564 { 00:21:26.564 "code": -5, 00:21:26.564 "message": "Input/output error" 00:21:26.564 } 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.564 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.565 07:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.130 request: 00:21:27.130 { 00:21:27.130 "name": "nvme0", 00:21:27.130 "trtype": "tcp", 00:21:27.130 "traddr": "10.0.0.2", 00:21:27.130 "adrfam": "ipv4", 00:21:27.130 "trsvcid": "4420", 00:21:27.130 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.130 "prchk_reftag": false, 00:21:27.130 "prchk_guard": false, 00:21:27.130 "hdgst": false, 00:21:27.130 "ddgst": false, 00:21:27.130 "dhchap_key": "key1", 00:21:27.130 "dhchap_ctrlr_key": "ckey1", 00:21:27.130 "method": "bdev_nvme_attach_controller", 00:21:27.130 "req_id": 1 00:21:27.130 } 00:21:27.130 Got JSON-RPC error response 00:21:27.130 response: 00:21:27.130 { 00:21:27.130 "code": -5, 00:21:27.130 "message": "Input/output error" 00:21:27.130 } 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1522052 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1522052 ']' 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1522052 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1522052 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1522052' 00:21:27.130 killing process with pid 1522052 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1522052 00:21:27.130 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1522052 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1544671 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1544671 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1544671 ']' 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.390 07:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1544671 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1544671 ']' 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
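The restart above brings the target back up with -L nvmf_auth so the remaining handshakes are traced in detail; the lines that follow repeat the connect_authenticate round trip for key3. For readers following the RPC traffic, here is a minimal sketch of one such round trip issued by hand. Every RPC name and flag appears verbatim in this trace; the standalone invocation (outside the rpc_cmd/hostrpc wrappers, with key3 already registered earlier in the run) is the only assumption.

# Sketch: one connect_authenticate round trip over rpc.py, assuming the
# target (/var/tmp/spdk.sock) and host app (/var/tmp/host.sock) are up
# and the named keys from the earlier setup are still registered.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Target side: allow the host with a DH-HMAC-CHAP key (no ckey3 here, so
# bidirectional authentication is skipped for key3, as in the trace).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# Host side: pin the initiator to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Attach a controller; the DH-HMAC-CHAP handshake runs during creation.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3

# Verify the qpair finished authentication, then detach.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0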
00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.653 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.911 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.911 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:27.911 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:27.911 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.911 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.169 07:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.102 00:21:29.102 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.102 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.102 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.360 { 00:21:29.360 
"cntlid": 1, 00:21:29.360 "qid": 0, 00:21:29.360 "state": "enabled", 00:21:29.360 "thread": "nvmf_tgt_poll_group_000", 00:21:29.360 "listen_address": { 00:21:29.360 "trtype": "TCP", 00:21:29.360 "adrfam": "IPv4", 00:21:29.360 "traddr": "10.0.0.2", 00:21:29.360 "trsvcid": "4420" 00:21:29.360 }, 00:21:29.360 "peer_address": { 00:21:29.360 "trtype": "TCP", 00:21:29.360 "adrfam": "IPv4", 00:21:29.360 "traddr": "10.0.0.1", 00:21:29.360 "trsvcid": "57006" 00:21:29.360 }, 00:21:29.360 "auth": { 00:21:29.360 "state": "completed", 00:21:29.360 "digest": "sha512", 00:21:29.360 "dhgroup": "ffdhe8192" 00:21:29.360 } 00:21:29.360 } 00:21:29.360 ]' 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.360 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.617 07:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTRkYmFjNTRlNDNiNWU4YWZlYzA5ZmU1MmU5OTQwYjQ2ZmQzNGI0YmQ5YTlkMzgwMjNiMGFjZGU5YjE0NGJiYaTDZx0=: 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:30.549 07:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.808 07:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.065 request: 00:21:31.065 { 00:21:31.065 "name": "nvme0", 00:21:31.065 "trtype": "tcp", 00:21:31.065 "traddr": "10.0.0.2", 00:21:31.065 "adrfam": "ipv4", 00:21:31.065 "trsvcid": "4420", 00:21:31.065 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.065 "prchk_reftag": false, 00:21:31.065 "prchk_guard": false, 00:21:31.065 "hdgst": false, 00:21:31.065 "ddgst": false, 00:21:31.065 "dhchap_key": "key3", 00:21:31.065 "method": "bdev_nvme_attach_controller", 00:21:31.065 "req_id": 1 00:21:31.065 } 00:21:31.065 Got JSON-RPC error response 00:21:31.065 response: 00:21:31.065 { 00:21:31.065 "code": -5, 00:21:31.065 "message": "Input/output error" 00:21:31.065 } 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:31.065 07:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:31.322 07:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.322 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:31.322 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.322 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:31.322 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.322 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:31.322 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.323 07:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.323 07:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.580 request: 00:21:31.580 { 00:21:31.580 "name": "nvme0", 00:21:31.580 "trtype": "tcp", 00:21:31.580 "traddr": "10.0.0.2", 00:21:31.580 "adrfam": "ipv4", 00:21:31.580 "trsvcid": "4420", 00:21:31.580 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.580 "prchk_reftag": false, 00:21:31.580 "prchk_guard": false, 00:21:31.580 "hdgst": false, 00:21:31.580 "ddgst": false, 00:21:31.580 "dhchap_key": "key3", 00:21:31.580 "method": "bdev_nvme_attach_controller", 00:21:31.580 "req_id": 1 00:21:31.580 } 00:21:31.580 Got JSON-RPC error response 00:21:31.580 response: 00:21:31.580 { 00:21:31.580 "code": -5, 00:21:31.580 "message": "Input/output error" 00:21:31.580 } 00:21:31.580 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:31.580 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.580 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.580 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.840 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.106 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.106 request: 00:21:32.106 { 00:21:32.106 "name": "nvme0", 00:21:32.106 "trtype": "tcp", 00:21:32.106 "traddr": "10.0.0.2", 00:21:32.106 "adrfam": "ipv4", 00:21:32.106 "trsvcid": "4420", 00:21:32.106 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:32.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.106 "prchk_reftag": false, 00:21:32.106 "prchk_guard": false, 00:21:32.106 "hdgst": false, 00:21:32.106 "ddgst": false, 00:21:32.106 
"dhchap_key": "key0", 00:21:32.106 "dhchap_ctrlr_key": "key1", 00:21:32.106 "method": "bdev_nvme_attach_controller", 00:21:32.106 "req_id": 1 00:21:32.106 } 00:21:32.106 Got JSON-RPC error response 00:21:32.106 response: 00:21:32.106 { 00:21:32.106 "code": -5, 00:21:32.106 "message": "Input/output error" 00:21:32.106 } 00:21:32.363 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:32.363 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:32.363 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:32.363 07:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:32.363 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:32.363 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:32.620 00:21:32.620 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:32.620 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.620 07:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:32.877 07:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.877 07:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.877 07:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1522195 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1522195 ']' 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1522195 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1522195 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1522195' 00:21:33.135 killing process with pid 1522195 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1522195 00:21:33.135 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1522195 
00:21:33.412 07:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:33.412 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.413 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:33.413 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.413 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:33.413 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.413 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.413 rmmod nvme_tcp 00:21:33.413 rmmod nvme_fabrics 00:21:33.413 rmmod nvme_keyring 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1544671 ']' 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1544671 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1544671 ']' 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1544671 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1544671 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1544671' 00:21:33.670 killing process with pid 1544671 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1544671 00:21:33.670 07:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1544671 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.927 07:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.830 07:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:35.830 07:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1Yz /tmp/spdk.key-sha256.Ap4 /tmp/spdk.key-sha384.xWb /tmp/spdk.key-sha512.qfZ /tmp/spdk.key-sha512.q9C /tmp/spdk.key-sha384.X9N /tmp/spdk.key-sha256.ZmH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:35.830 00:21:35.830 real 3m9.520s 00:21:35.830 user 7m20.842s 00:21:35.830 sys 0m24.876s 00:21:35.830 07:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.830 07:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.830 ************************************ 00:21:35.830 END TEST nvmf_auth_target 00:21:35.830 ************************************ 00:21:35.830 07:09:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:35.830 07:09:05 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:35.830 07:09:05 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:35.830 07:09:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:35.830 07:09:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.830 07:09:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.830 ************************************ 00:21:35.830 START TEST nvmf_bdevio_no_huge 00:21:35.830 ************************************ 00:21:35.830 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:35.830 * Looking for test storage... 00:21:36.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.087 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
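
The END/START banners and the real/user/sys timing above come from the run_test wrapper in autotest_common.sh, which nvmf.sh uses to chain suites. Reconstructed from trace lines nvmf.sh@59-@60 (the variable names are shorthand of mine; the trace only shows the expanded command):

    # nvmf.sh: only run the no-hugepage bdevio suite on the tcp transport.
    if [ "$TEST_TRANSPORT" = tcp ]; then
        run_test nvmf_bdevio_no_huge \
            "$rootdir/test/nvmf/target/bdevio.sh" --transport=tcp --no-hugepages
    fi
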
00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.088 07:09:05 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:36.088 07:09:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.985 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:37.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:37.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.986 
07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:37.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:37.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.986 07:09:07 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.986 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:21:38.244 00:21:38.244 --- 10.0.0.2 ping statistics --- 00:21:38.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.244 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:21:38.244 00:21:38.244 --- 10.0.0.1 ping statistics --- 00:21:38.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.244 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --no-huge -s 1024 -m 0x78 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1547698 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1547698 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1547698 ']' 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.244 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.244 [2024-07-13 07:09:07.594657] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:38.244 [2024-07-13 07:09:07.594758] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:38.245 [2024-07-13 07:09:07.653756] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:38.245 [2024-07-13 07:09:07.676424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.502 [2024-07-13 07:09:07.766816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.502 [2024-07-13 07:09:07.766897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.502 [2024-07-13 07:09:07.766917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.502 [2024-07-13 07:09:07.766932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.502 [2024-07-13 07:09:07.766943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
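
The target for this suite is launched inside the SPDK network namespace with hugepages disabled, which is the whole point of the test. A minimal sketch of the launch traced above (waitforlisten is the common.sh helper that blocks until the RPC socket answers):

    # -i 0: shm id; -e 0xFFFF: tracepoint mask; --no-huge -s 1024: run on
    # 1024 MB of anonymous memory instead of hugepages; -m 0x78: pin the
    # reactors to cores 3-6, matching the reactor notices that follow.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
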
00:21:38.502 [2024-07-13 07:09:07.767031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:38.502 [2024-07-13 07:09:07.767104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:38.502 [2024-07-13 07:09:07.767163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:38.502 [2024-07-13 07:09:07.767166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.502 [2024-07-13 07:09:07.890474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.502 Malloc0 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.502 [2024-07-13 07:09:07.928229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.502 { 00:21:38.502 "params": { 00:21:38.502 "name": "Nvme$subsystem", 00:21:38.502 "trtype": "$TEST_TRANSPORT", 00:21:38.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.502 "adrfam": "ipv4", 00:21:38.502 "trsvcid": "$NVMF_PORT", 00:21:38.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.502 "hdgst": ${hdgst:-false}, 00:21:38.502 "ddgst": ${ddgst:-false} 00:21:38.502 }, 00:21:38.502 "method": "bdev_nvme_attach_controller" 00:21:38.502 } 00:21:38.502 EOF 00:21:38.502 )") 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:38.502 07:09:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:38.502 "params": { 00:21:38.502 "name": "Nvme1", 00:21:38.502 "trtype": "tcp", 00:21:38.502 "traddr": "10.0.0.2", 00:21:38.502 "adrfam": "ipv4", 00:21:38.502 "trsvcid": "4420", 00:21:38.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.502 "hdgst": false, 00:21:38.502 "ddgst": false 00:21:38.502 }, 00:21:38.502 "method": "bdev_nvme_attach_controller" 00:21:38.502 }' 00:21:38.759 [2024-07-13 07:09:07.973975] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:38.759 [2024-07-13 07:09:07.974064] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1547871 ] 00:21:38.759 [2024-07-13 07:09:08.017262] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
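
The heredoc printed above is the host-side configuration: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem, and bdevio reads the rendered JSON from an inherited file descriptor instead of issuing live RPCs. A sketch of the same idea using process substitution (the harness's actual /dev/fd/62 plumbing is equivalent, just more indirect):

    # bdevio runs under the same no-hugepage limits as the target it exercises.
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024
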
00:21:38.759 [2024-07-13 07:09:08.037515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:38.759 [2024-07-13 07:09:08.124041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.759 [2024-07-13 07:09:08.124063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.760 [2024-07-13 07:09:08.124065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.017 I/O targets: 00:21:39.017 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:39.017 00:21:39.017 00:21:39.017 CUnit - A unit testing framework for C - Version 2.1-3 00:21:39.017 http://cunit.sourceforge.net/ 00:21:39.017 00:21:39.017 00:21:39.017 Suite: bdevio tests on: Nvme1n1 00:21:39.274 Test: blockdev write read block ...passed 00:21:39.274 Test: blockdev write zeroes read block ...passed 00:21:39.274 Test: blockdev write zeroes read no split ...passed 00:21:39.274 Test: blockdev write zeroes read split ...passed 00:21:39.274 Test: blockdev write zeroes read split partial ...passed 00:21:39.274 Test: blockdev reset ...[2024-07-13 07:09:08.647315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.274 [2024-07-13 07:09:08.647424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb36330 (9): Bad file descriptor 00:21:39.274 [2024-07-13 07:09:08.703358] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:39.274 passed 00:21:39.274 Test: blockdev write read 8 blocks ...passed 00:21:39.274 Test: blockdev write read size > 128k ...passed 00:21:39.274 Test: blockdev write read invalid size ...passed 00:21:39.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:39.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:39.531 Test: blockdev write read max offset ...passed 00:21:39.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:39.531 Test: blockdev writev readv 8 blocks ...passed 00:21:39.531 Test: blockdev writev readv 30 x 1block ...passed 00:21:39.532 Test: blockdev writev readv block ...passed 00:21:39.532 Test: blockdev writev readv size > 128k ...passed 00:21:39.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:39.532 Test: blockdev comparev and writev ...[2024-07-13 07:09:08.957456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:39.532 [2024-07-13 07:09:08.957491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.532 [2024-07-13 07:09:08.957515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:39.532 [2024-07-13 07:09:08.957532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.532 [2024-07-13 07:09:08.957902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:39.532 [2024-07-13 07:09:08.957928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:39.532 [2024-07-13 07:09:08.957949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:21:39.532 [2024-07-13 07:09:08.957966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:39.532 [2024-07-13 07:09:08.958322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:39.532 [2024-07-13 07:09:08.958348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:39.532 [2024-07-13 07:09:08.958372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:39.532 [2024-07-13 07:09:08.958388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:39.532 [2024-07-13 07:09:08.958747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:39.532 [2024-07-13 07:09:08.958776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:39.532 [2024-07-13 07:09:08.958800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:39.532 [2024-07-13 07:09:08.958816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:39.790 passed 00:21:39.790 Test: blockdev nvme passthru rw ...passed 00:21:39.790 Test: blockdev nvme passthru vendor specific ...[2024-07-13 07:09:09.043165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:39.790 [2024-07-13 07:09:09.043193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:39.790 [2024-07-13 07:09:09.043368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:39.790 [2024-07-13 07:09:09.043391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:39.790 [2024-07-13 07:09:09.043563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:39.790 [2024-07-13 07:09:09.043585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:39.790 [2024-07-13 07:09:09.043757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:39.790 [2024-07-13 07:09:09.043780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:39.790 passed 00:21:39.790 Test: blockdev nvme admin passthru ...passed 00:21:39.790 Test: blockdev copy ...passed 00:21:39.790 00:21:39.790 Run Summary: Type Total Ran Passed Failed Inactive 00:21:39.790 suites 1 1 n/a 0 0 00:21:39.790 tests 23 23 23 0 0 00:21:39.790 asserts 152 152 152 0 n/a 00:21:39.790 00:21:39.790 Elapsed time = 1.330 seconds 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.047 rmmod nvme_tcp 00:21:40.047 rmmod nvme_fabrics 00:21:40.047 rmmod nvme_keyring 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1547698 ']' 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1547698 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1547698 ']' 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1547698 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.047 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1547698 00:21:40.304 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:40.304 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:40.304 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1547698' 00:21:40.304 killing process with pid 1547698 00:21:40.304 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1547698 00:21:40.304 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1547698 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.564 07:09:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:43.091 07:09:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:43.091 00:21:43.091 real 0m6.692s 00:21:43.091 user 0m11.518s 00:21:43.091 sys 0m2.580s 00:21:43.091 07:09:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.091 07:09:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:43.091 ************************************ 00:21:43.091 END TEST nvmf_bdevio_no_huge 00:21:43.091 ************************************ 00:21:43.091 07:09:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:43.091 07:09:11 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:43.091 07:09:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:43.091 07:09:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.091 07:09:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:43.091 ************************************ 00:21:43.091 START TEST nvmf_tls 00:21:43.091 ************************************ 00:21:43.091 07:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:43.091 * Looking for test storage... 00:21:43.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
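
nvmftestinit now repeats, for the tls suite, the same physical-NIC plumbing the bdevio suite traced above: one port of the E810 pair moves into a namespace to act as the target, the other stays in the root namespace as the initiator. Condensed from the commands in this log:

    # nvmf_tcp_init from nvmf/common.sh, as traced in this log.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
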
00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.092 07:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.990 
07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:44.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:44.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:44.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.990 07:09:13 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:44.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.990 07:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:44.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:21:44.990 00:21:44.990 --- 10.0.0.2 ping statistics --- 00:21:44.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.990 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:44.990 00:21:44.990 --- 10.0.0.1 ping statistics --- 00:21:44.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.990 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1550198 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1550198 00:21:44.990 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1550198 ']' 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.991 [2024-07-13 07:09:14.135998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
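nvmftestinit has just split one physical E810 port pair into the two ends of the fabric: the target port cvl_0_0 moves into a private network namespace with 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, so both ends run on a single host over real hardware. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt launch that follows is prefixed with 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD set above), so the target listens inside the namespace.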
00:21:44.991 [2024-07-13 07:09:14.136080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.991 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.991 [2024-07-13 07:09:14.175874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:44.991 [2024-07-13 07:09:14.203199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.991 [2024-07-13 07:09:14.288349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.991 [2024-07-13 07:09:14.288392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.991 [2024-07-13 07:09:14.288415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.991 [2024-07-13 07:09:14.288426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.991 [2024-07-13 07:09:14.288435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.991 [2024-07-13 07:09:14.288465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:44.991 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:45.248 true 00:21:45.248 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:45.248 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:45.506 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:45.506 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:45.506 07:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:45.764 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:45.764 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:46.022 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:46.022 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:46.022 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:46.280 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:46.280 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:46.538 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:46.538 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:46.538 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:46.538 07:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:46.796 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:46.796 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:46.796 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:47.053 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:47.053 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:47.311 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:47.311 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:47.311 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:47.568 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:47.569 07:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:47.826 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:47.826 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 
-- # mktemp 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.wqJMhAQy9J 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Eabve2e4HJ 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.wqJMhAQy9J 00:21:47.827 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Eabve2e4HJ 00:21:48.085 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:48.085 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:48.651 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.wqJMhAQy9J 00:21:48.651 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wqJMhAQy9J 00:21:48.651 07:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:48.910 [2024-07-13 07:09:18.151013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.910 07:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.168 07:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:49.426 [2024-07-13 07:09:18.644325] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.426 [2024-07-13 07:09:18.644546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.426 07:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:49.685 malloc0 00:21:49.685 07:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:49.944 07:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wqJMhAQy9J 00:21:50.202 [2024-07-13 07:09:19.470420] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:50.202 07:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.wqJMhAQy9J 00:21:50.202 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.181 Initializing NVMe Controllers 00:22:00.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:22:00.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:00.181 Initialization complete. Launching workers. 00:22:00.181 ======================================================== 00:22:00.181 Latency(us) 00:22:00.181 Device Information : IOPS MiB/s Average min max 00:22:00.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7692.46 30.05 8322.19 1433.72 9939.27 00:22:00.181 ======================================================== 00:22:00.181 Total : 7692.46 30.05 8322.19 1433.72 9939.27 00:22:00.181 00:22:00.181 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wqJMhAQy9J 00:22:00.181 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wqJMhAQy9J' 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1552036 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1552036 /var/tmp/bdevperf.sock 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1552036 ']' 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.182 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.484 [2024-07-13 07:09:29.637108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:00.485 [2024-07-13 07:09:29.637211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552036 ] 00:22:00.485 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.485 [2024-07-13 07:09:29.668633] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
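The first data-path run succeeds: spdk_nvme_perf connects to 10.0.0.2:4420 with -S ssl, using the key registered for host1, and sustains about 7.7k IOPS of 4k randrw. The NVMeTLSkey-1:01:...: strings it and the target share were built a few steps back by format_interchange_psk, which wraps the configured key in the NVMe TLS PSK interchange format: a two-hex-digit hash indicator (01 here, 02 for the 48-character key near the end of this log) and a base64 blob of the key bytes with their CRC-32 appended. A sketch of the helper's core, assuming Python 3 for the embedded script the trace shows as 'python -':

    format_interchange_psk() {   # usage: format_interchange_psk <key> <digest>
        # key bytes are used as typed (not hex-decoded); CRC-32 is appended little-endian
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$1" "$2"
    }

The results are written to mktemp files and chmod'd to 0600, since both nvmf_subsystem_add_host --psk and bdev_nvme_attach_controller --psk take the key as a file path in this SPDK revision.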
00:22:00.485 [2024-07-13 07:09:29.696287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.485 [2024-07-13 07:09:29.782038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.485 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.485 07:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:00.485 07:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wqJMhAQy9J 00:22:00.742 [2024-07-13 07:09:30.128592] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.742 [2024-07-13 07:09:30.128724] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:00.999 TLSTESTn1 00:22:00.999 07:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:00.999 Running I/O for 10 seconds... 00:22:10.963 00:22:10.963 Latency(us) 00:22:10.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.963 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.963 Verification LBA range: start 0x0 length 0x2000 00:22:10.963 TLSTESTn1 : 10.04 3213.31 12.55 0.00 0.00 39738.91 11505.21 59030.95 00:22:10.963 =================================================================================================================== 00:22:10.963 Total : 3213.31 12.55 0.00 0.00 39738.91 11505.21 59030.95 00:22:10.963 0 00:22:10.963 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:10.963 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1552036 00:22:10.963 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1552036 ']' 00:22:10.963 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1552036 00:22:10.963 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:10.963 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.963 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1552036 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1552036' 00:22:11.221 killing process with pid 1552036 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1552036 00:22:11.221 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.221 00:22:11.221 Latency(us) 00:22:11.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.221 =================================================================================================================== 00:22:11.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.221 [2024-07-13 07:09:40.438688] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1552036 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eabve2e4HJ 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eabve2e4HJ 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eabve2e4HJ 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Eabve2e4HJ' 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1553352 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1553352 /var/tmp/bdevperf.sock 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1553352 ']' 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.221 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.479 [2024-07-13 07:09:40.713524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:11.479 [2024-07-13 07:09:40.713615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553352 ] 00:22:11.479 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.479 [2024-07-13 07:09:40.745600] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:22:11.479 [2024-07-13 07:09:40.773410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.479 [2024-07-13 07:09:40.858683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.737 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.737 07:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:11.737 07:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Eabve2e4HJ 00:22:11.995 [2024-07-13 07:09:41.238394] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.995 [2024-07-13 07:09:41.238510] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:11.995 [2024-07-13 07:09:41.248077] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:11.995 [2024-07-13 07:09:41.248362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110b8d0 (107): Transport endpoint is not connected 00:22:11.995 [2024-07-13 07:09:41.249353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110b8d0 (9): Bad file descriptor 00:22:11.995 [2024-07-13 07:09:41.250351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:11.995 [2024-07-13 07:09:41.250372] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:11.995 [2024-07-13 07:09:41.250389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
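tls.sh@146 is the first negative case: the attach presents the second key, /tmp/tmp.Eabve2e4HJ, while the subsystem only has the first key registered for host1, so the TLS handshake never completes and the controller ends up in the failed state above; the JSON-RPC request and its Input/output error response are dumped next. The NOT wrapper from autotest_common.sh makes the test pass exactly when the wrapped command fails; a minimal sketch of the idea, not the exact helper:

    NOT() {            # succeed only if the wrapped command fails
        if "$@"; then
            return 1   # unexpected success
        fi
        return 0
    }

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eabve2e4HJ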
00:22:11.995 request: 00:22:11.995 { 00:22:11.995 "name": "TLSTEST", 00:22:11.995 "trtype": "tcp", 00:22:11.995 "traddr": "10.0.0.2", 00:22:11.995 "adrfam": "ipv4", 00:22:11.995 "trsvcid": "4420", 00:22:11.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:11.995 "prchk_reftag": false, 00:22:11.995 "prchk_guard": false, 00:22:11.995 "hdgst": false, 00:22:11.995 "ddgst": false, 00:22:11.995 "psk": "/tmp/tmp.Eabve2e4HJ", 00:22:11.995 "method": "bdev_nvme_attach_controller", 00:22:11.995 "req_id": 1 00:22:11.995 } 00:22:11.995 Got JSON-RPC error response 00:22:11.995 response: 00:22:11.995 { 00:22:11.995 "code": -5, 00:22:11.995 "message": "Input/output error" 00:22:11.995 } 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1553352 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1553352 ']' 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1553352 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553352 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553352' 00:22:11.995 killing process with pid 1553352 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1553352 00:22:11.995 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.995 00:22:11.995 Latency(us) 00:22:11.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.995 =================================================================================================================== 00:22:11.995 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:11.995 [2024-07-13 07:09:41.302863] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:11.995 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1553352 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wqJMhAQy9J 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wqJMhAQy9J 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wqJMhAQy9J 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wqJMhAQy9J' 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1553403 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1553403 /var/tmp/bdevperf.sock 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1553403 ']' 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.254 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.254 [2024-07-13 07:09:41.556510] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:12.254 [2024-07-13 07:09:41.556606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553403 ] 00:22:12.254 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.254 [2024-07-13 07:09:41.589484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
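tls.sh@149 flips the failure mode: the key is the right one, but the connection now identifies as nqn.2016-06.io.spdk:host2, for which the target holds no PSK. On the server side, SPDK resolves keys by the TLS PSK identity it derives from the host and subsystem NQNs, which is why the lookup errors below name both. The identity shape, with the NVMe0R01 version/hash prefix taken verbatim from those messages:

    # identity = "NVMe0R01 <hostnqn> <subnqn>"
    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1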
00:22:12.254 [2024-07-13 07:09:41.616992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.254 [2024-07-13 07:09:41.701969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.513 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.513 07:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:12.513 07:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.wqJMhAQy9J 00:22:12.770 [2024-07-13 07:09:42.078575] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.770 [2024-07-13 07:09:42.078691] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:12.770 [2024-07-13 07:09:42.090099] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:12.770 [2024-07-13 07:09:42.090132] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:12.770 [2024-07-13 07:09:42.090198] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:12.770 [2024-07-13 07:09:42.090655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f48d0 (107): Transport endpoint is not connected 00:22:12.770 [2024-07-13 07:09:42.091646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f48d0 (9): Bad file descriptor 00:22:12.770 [2024-07-13 07:09:42.092644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.770 [2024-07-13 07:09:42.092665] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:12.770 [2024-07-13 07:09:42.092681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
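For host2 to have succeeded, the target would have needed a key registered for it before the attach, mirroring the nvmf_subsystem_add_host call made for host1 during setup. A hypothetical registration (this test never runs it, and /tmp/host2.key is an invented path):

    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/host2.key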
00:22:12.770 request: 00:22:12.770 { 00:22:12.770 "name": "TLSTEST", 00:22:12.770 "trtype": "tcp", 00:22:12.770 "traddr": "10.0.0.2", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.770 "prchk_reftag": false, 00:22:12.770 "prchk_guard": false, 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false, 00:22:12.770 "psk": "/tmp/tmp.wqJMhAQy9J", 00:22:12.770 "method": "bdev_nvme_attach_controller", 00:22:12.770 "req_id": 1 00:22:12.771 } 00:22:12.771 Got JSON-RPC error response 00:22:12.771 response: 00:22:12.771 { 00:22:12.771 "code": -5, 00:22:12.771 "message": "Input/output error" 00:22:12.771 } 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1553403 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1553403 ']' 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1553403 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553403 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553403' 00:22:12.771 killing process with pid 1553403 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1553403 00:22:12.771 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.771 00:22:12.771 Latency(us) 00:22:12.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.771 =================================================================================================================== 00:22:12.771 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:12.771 [2024-07-13 07:09:42.143447] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:12.771 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1553403 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wqJMhAQy9J 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wqJMhAQy9J 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wqJMhAQy9J 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wqJMhAQy9J' 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1553503 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1553503 /var/tmp/bdevperf.sock 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1553503 ']' 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.029 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 [2024-07-13 07:09:42.408676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:13.029 [2024-07-13 07:09:42.408770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553503 ] 00:22:13.029 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.029 [2024-07-13 07:09:42.441425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
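Each of these cases follows the same choreography: start a fresh bdevperf in RPC-wait mode (-z) on its own UNIX socket, attach the controller through that socket, then drive I/O via bdevperf.py. Condensed from the invocations in this log, with $spdk standing in for the workspace checkout:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    waitforlisten "$!" /var/tmp/bdevperf.sock      # autotest helper: block until the socket is up
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

In the failing cases only the attach step runs; it is the call wrapped in NOT.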
00:22:13.029 [2024-07-13 07:09:42.469601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.287 [2024-07-13 07:09:42.556131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.287 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.287 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:13.287 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wqJMhAQy9J 00:22:13.543 [2024-07-13 07:09:42.887398] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.543 [2024-07-13 07:09:42.887523] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:13.543 [2024-07-13 07:09:42.892670] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:13.543 [2024-07-13 07:09:42.892709] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:13.543 [2024-07-13 07:09:42.892759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:13.543 [2024-07-13 07:09:42.893339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212e8d0 (107): Transport endpoint is not connected 00:22:13.543 [2024-07-13 07:09:42.894326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212e8d0 (9): Bad file descriptor 00:22:13.543 [2024-07-13 07:09:42.895324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:13.543 [2024-07-13 07:09:42.895345] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:13.543 [2024-07-13 07:09:42.895362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:13.543 request: 00:22:13.543 { 00:22:13.543 "name": "TLSTEST", 00:22:13.543 "trtype": "tcp", 00:22:13.543 "traddr": "10.0.0.2", 00:22:13.543 "adrfam": "ipv4", 00:22:13.543 "trsvcid": "4420", 00:22:13.543 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:13.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.543 "prchk_reftag": false, 00:22:13.543 "prchk_guard": false, 00:22:13.543 "hdgst": false, 00:22:13.543 "ddgst": false, 00:22:13.543 "psk": "/tmp/tmp.wqJMhAQy9J", 00:22:13.543 "method": "bdev_nvme_attach_controller", 00:22:13.543 "req_id": 1 00:22:13.543 } 00:22:13.543 Got JSON-RPC error response 00:22:13.543 response: 00:22:13.543 { 00:22:13.543 "code": -5, 00:22:13.543 "message": "Input/output error" 00:22:13.543 } 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1553503 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1553503 ']' 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1553503 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553503 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553503' 00:22:13.543 killing process with pid 1553503 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1553503 00:22:13.543 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.543 00:22:13.543 Latency(us) 00:22:13.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.543 =================================================================================================================== 00:22:13.543 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:13.543 [2024-07-13 07:09:42.948313] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:13.543 07:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1553503 00:22:13.800 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1553640 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1553640 /var/tmp/bdevperf.sock 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1553640 ']' 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.801 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.801 [2024-07-13 07:09:43.216694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:13.801 [2024-07-13 07:09:43.216774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553640 ] 00:22:13.801 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.801 [2024-07-13 07:09:43.249083] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
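tls.sh@155 passes an empty psk, so bdev_nvme_attach_controller is called with no --psk at all. The listener was created during setup with -k (rpc.py's secure-channel flag, per the experimental-TLS notices earlier in the log), so a plain NVMe/TCP connect cannot get through the handshake and the attach fails just like the bad-key cases. The contrast, using the RPCs this log already issued:

    # setup: listener that requires a secure channel
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # this case: attach with no PSK; expected to fail against that listener
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1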
00:22:14.057 [2024-07-13 07:09:43.276275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.057 [2024-07-13 07:09:43.357729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.057 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.057 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:14.057 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:14.316 [2024-07-13 07:09:43.724483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:14.316 [2024-07-13 07:09:43.726323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee2de0 (9): Bad file descriptor 00:22:14.316 [2024-07-13 07:09:43.727319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.316 [2024-07-13 07:09:43.727340] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:14.316 [2024-07-13 07:09:43.727356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.316 request: 00:22:14.316 { 00:22:14.316 "name": "TLSTEST", 00:22:14.316 "trtype": "tcp", 00:22:14.316 "traddr": "10.0.0.2", 00:22:14.316 "adrfam": "ipv4", 00:22:14.316 "trsvcid": "4420", 00:22:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.316 "prchk_reftag": false, 00:22:14.316 "prchk_guard": false, 00:22:14.316 "hdgst": false, 00:22:14.316 "ddgst": false, 00:22:14.316 "method": "bdev_nvme_attach_controller", 00:22:14.316 "req_id": 1 00:22:14.316 } 00:22:14.316 Got JSON-RPC error response 00:22:14.316 response: 00:22:14.316 { 00:22:14.316 "code": -5, 00:22:14.316 "message": "Input/output error" 00:22:14.316 } 00:22:14.316 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1553640 00:22:14.316 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1553640 ']' 00:22:14.316 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1553640 00:22:14.316 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:14.316 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.316 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553640 00:22:14.574 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:14.574 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:14.574 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553640' 00:22:14.574 killing process with pid 1553640 00:22:14.574 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1553640 00:22:14.574 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.574 00:22:14.574 Latency(us) 00:22:14.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.574 =================================================================================================================== 00:22:14.574 Total : 0.00 0.00 0.00 
0.00 0.00 18446744073709551616.00 0.00 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1553640 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1550198 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1550198 ']' 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1550198 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.575 07:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1550198 00:22:14.575 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:14.575 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:14.575 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1550198' 00:22:14.575 killing process with pid 1550198 00:22:14.575 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1550198 00:22:14.575 [2024-07-13 07:09:44.012442] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:14.575 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1550198 00:22:14.833 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:14.833 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:14.833 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:14.833 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:14.833 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:14.833 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:14.833 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.hW59mtHl8Q 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.hW59mtHl8Q 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
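The format_interchange_psk/format_key step above builds the NVMe TLS PSK interchange string used for the rest of the run: the raw key bytes get a CRC32 appended and the result is base64-encoded between the NVMeTLSkey-1 prefix and a hash-indicator field (01 = SHA-256, 02 = SHA-384). A minimal sketch of that helper, assuming the CRC is packed little-endian as in nvmf/common.sh:

format_key() {
    local prefix=$1 key=$2 digest=$3
    # digest 1 -> hash field 01 (SHA-256), digest 2 -> 02 (SHA-384)
    python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
# append CRC32 of the key bytes (little-endian), then base64 key+crc together
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)
print("%s:%02d:%s:" % (prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}

Called as format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2, this should reproduce the NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: value logged above. The chmod 0600 that follows matters later in the run, since SPDK rejects PSK files with looser permissions.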
00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1553789 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1553789 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1553789 ']' 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.092 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.092 [2024-07-13 07:09:44.366990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:15.092 [2024-07-13 07:09:44.367071] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.092 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.092 [2024-07-13 07:09:44.404711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:15.092 [2024-07-13 07:09:44.430614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.092 [2024-07-13 07:09:44.514268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.092 [2024-07-13 07:09:44.514328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.092 [2024-07-13 07:09:44.514341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.092 [2024-07-13 07:09:44.514351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.092 [2024-07-13 07:09:44.514361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
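At this point nvmfappstart has launched nvmf_tgt (pid 1553789) inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough stand-in for that helper, polling a trivial RPC (function name and retry counts are assumptions; the real implementation lives in autotest_common.sh):

wait_for_rpc() {
    local sock=${1:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        # any cheap method works; rpc_get_methods answers once the app is up
        scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}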
00:22:15.092 [2024-07-13 07:09:44.514387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.hW59mtHl8Q 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hW59mtHl8Q 00:22:15.350 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.608 [2024-07-13 07:09:44.875178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.608 07:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:15.866 07:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:16.124 [2024-07-13 07:09:45.360446] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.124 [2024-07-13 07:09:45.360688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.124 07:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:16.382 malloc0 00:22:16.382 07:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:16.640 07:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q 00:22:16.640 [2024-07-13 07:09:46.090209] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:16.897 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hW59mtHl8Q 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hW59mtHl8Q' 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1554073 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 
10 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1554073 /var/tmp/bdevperf.sock 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1554073 ']' 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.898 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.898 [2024-07-13 07:09:46.157230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:16.898 [2024-07-13 07:09:46.157319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554073 ] 00:22:16.898 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.898 [2024-07-13 07:09:46.189512] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:16.898 [2024-07-13 07:09:46.217158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.898 [2024-07-13 07:09:46.303624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.156 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.156 07:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:17.156 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q 00:22:17.414 [2024-07-13 07:09:46.634973] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.414 [2024-07-13 07:09:46.635096] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:17.414 TLSTESTn1 00:22:17.414 07:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:17.414 Running I/O for 10 seconds... 
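Condensed, the happy-path sequence that just completed is: create a TCP transport, add a TLS-enabled listener (-k) to the subsystem, register the host's PSK, then attach from bdevperf with the same key file. Pulled straight from the commands above, with the long workspace paths shortened:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q
# initiator side, against bdevperf's RPC socket:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q

Contrast this with the failed attach at the top of this section, where the same command ran without --psk against the TLS listener and died with errno 107 and a -5 Input/output error.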
00:22:29.607 00:22:29.607 Latency(us) 00:22:29.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.607 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:29.607 Verification LBA range: start 0x0 length 0x2000 00:22:29.607 TLSTESTn1 : 10.06 2227.06 8.70 0.00 0.00 57315.01 10000.31 79614.10 00:22:29.607 =================================================================================================================== 00:22:29.607 Total : 2227.06 8.70 0.00 0.00 57315.01 10000.31 79614.10 00:22:29.607 0 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1554073 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1554073 ']' 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1554073 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1554073 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1554073' 00:22:29.607 killing process with pid 1554073 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1554073 00:22:29.607 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.607 00:22:29.607 Latency(us) 00:22:29.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.607 =================================================================================================================== 00:22:29.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:29.607 [2024-07-13 07:09:56.962967] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:29.607 07:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1554073 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.hW59mtHl8Q 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hW59mtHl8Q 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hW59mtHl8Q 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hW59mtHl8Q 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hW59mtHl8Q' 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1555265 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1555265 /var/tmp/bdevperf.sock 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1555265 ']' 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.607 [2024-07-13 07:09:57.239668] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:29.607 [2024-07-13 07:09:57.239760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555265 ] 00:22:29.607 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.607 [2024-07-13 07:09:57.272091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
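The key file was relaxed to 0666 just before this, and run_bdevperf is wrapped in NOT, so the attach below is expected to fail: SPDK refuses PSK files that are group- or other-accessible. A shell-level equivalent of the gate (illustrative only; the real check sits in bdev_nvme.c's bdev_nvme_load_psk, whose error appears next):

key=/tmp/tmp.hW59mtHl8Q
# GNU stat prints the octal mode; any group/other access bits are rejected,
# which this sketch simplifies to an exact 0600 match
if [[ $(stat -c '%a' "$key") != 600 ]]; then
    echo "Incorrect permissions for PSK file: $key" >&2
    exit 1
fi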
00:22:29.607 [2024-07-13 07:09:57.298875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.607 [2024-07-13 07:09:57.383861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q 00:22:29.607 [2024-07-13 07:09:57.725994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.607 [2024-07-13 07:09:57.726072] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:29.607 [2024-07-13 07:09:57.726087] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.hW59mtHl8Q 00:22:29.607 request: 00:22:29.607 { 00:22:29.607 "name": "TLSTEST", 00:22:29.607 "trtype": "tcp", 00:22:29.607 "traddr": "10.0.0.2", 00:22:29.607 "adrfam": "ipv4", 00:22:29.607 "trsvcid": "4420", 00:22:29.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.607 "prchk_reftag": false, 00:22:29.607 "prchk_guard": false, 00:22:29.607 "hdgst": false, 00:22:29.607 "ddgst": false, 00:22:29.607 "psk": "/tmp/tmp.hW59mtHl8Q", 00:22:29.607 "method": "bdev_nvme_attach_controller", 00:22:29.607 "req_id": 1 00:22:29.607 } 00:22:29.607 Got JSON-RPC error response 00:22:29.607 response: 00:22:29.607 { 00:22:29.607 "code": -1, 00:22:29.607 "message": "Operation not permitted" 00:22:29.607 } 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1555265 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1555265 ']' 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1555265 00:22:29.607 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555265 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555265' 00:22:29.608 killing process with pid 1555265 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1555265 00:22:29.608 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.608 00:22:29.608 Latency(us) 00:22:29.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.608 =================================================================================================================== 00:22:29.608 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1555265 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:29.608 07:09:57 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1553789 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1553789 ']' 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1553789 00:22:29.608 07:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553789 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553789' 00:22:29.608 killing process with pid 1553789 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1553789 00:22:29.608 [2024-07-13 07:09:58.026096] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1553789 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1555410 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1555410 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1555410 ']' 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.608 [2024-07-13 07:09:58.326094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:22:29.608 [2024-07-13 07:09:58.326201] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.608 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.608 [2024-07-13 07:09:58.362773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:29.608 [2024-07-13 07:09:58.394654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.608 [2024-07-13 07:09:58.481301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.608 [2024-07-13 07:09:58.481367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.608 [2024-07-13 07:09:58.481393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.608 [2024-07-13 07:09:58.481407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.608 [2024-07-13 07:09:58.481418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.608 [2024-07-13 07:09:58.481456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.hW59mtHl8Q 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hW59mtHl8Q 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.hW59mtHl8Q 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hW59mtHl8Q 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.608 [2024-07-13 07:09:58.856630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.608 07:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:29.866 07:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.124 [2024-07-13 07:09:59.370047] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.124 [2024-07-13 07:09:59.370306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.124 07:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.383 malloc0 00:22:30.383 07:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.642 07:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q 00:22:30.899 [2024-07-13 07:10:00.184252] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:30.899 [2024-07-13 07:10:00.184313] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:30.899 [2024-07-13 07:10:00.184353] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:30.899 request: 00:22:30.899 { 00:22:30.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.899 "host": "nqn.2016-06.io.spdk:host1", 00:22:30.899 "psk": "/tmp/tmp.hW59mtHl8Q", 00:22:30.899 "method": "nvmf_subsystem_add_host", 00:22:30.899 "req_id": 1 00:22:30.899 } 00:22:30.899 Got JSON-RPC error response 00:22:30.899 response: 00:22:30.899 { 00:22:30.899 "code": -32603, 00:22:30.899 "message": "Internal error" 00:22:30.899 } 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1555410 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1555410 ']' 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1555410 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555410 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555410' 00:22:30.899 killing process with pid 1555410 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1555410 00:22:30.899 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1555410 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.hW59mtHl8Q 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1555703 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1555703 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1555703 ']' 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.159 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.159 [2024-07-13 07:10:00.540313] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:31.159 [2024-07-13 07:10:00.540406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.159 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.159 [2024-07-13 07:10:00.587744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:31.416 [2024-07-13 07:10:00.618795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.416 [2024-07-13 07:10:00.712426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.416 [2024-07-13 07:10:00.712503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.416 [2024-07-13 07:10:00.712520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.416 [2024-07-13 07:10:00.712534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.416 [2024-07-13 07:10:00.712546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
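Since the target runs with -e 0xFFFF, all tracepoint groups are enabled, and the startup notices above give the recipe for capturing them. Schematically (the binary location and the -f flag for offline files are assumptions based on spdk_trace's usual usage):

# decode a live snapshot from the running app (app name nvmf, instance 0):
build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
# or save the shared-memory ring for offline decoding later:
cp /dev/shm/nvmf_trace.0 /tmp/ && build/bin/spdk_trace -f /tmp/nvmf_trace.0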
00:22:31.416 [2024-07-13 07:10:00.712577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.hW59mtHl8Q 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hW59mtHl8Q 00:22:31.416 07:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:31.674 [2024-07-13 07:10:01.088557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.674 07:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:31.931 07:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.188 [2024-07-13 07:10:01.622019] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.188 [2024-07-13 07:10:01.622267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.188 07:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.750 malloc0 00:22:32.750 07:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:32.750 07:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q 00:22:33.006 [2024-07-13 07:10:02.428225] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1555985 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1555985 /var/tmp/bdevperf.sock 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1555985 ']' 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.006 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.263 [2024-07-13 07:10:02.484549] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:33.263 [2024-07-13 07:10:02.484615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555985 ] 00:22:33.263 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.263 [2024-07-13 07:10:02.516469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:33.263 [2024-07-13 07:10:02.542219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.263 [2024-07-13 07:10:02.626060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.519 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.519 07:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:33.519 07:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q 00:22:33.776 [2024-07-13 07:10:02.987225] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.776 [2024-07-13 07:10:02.987334] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:33.776 TLSTESTn1 00:22:33.776 07:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:34.032 07:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:34.032 "subsystems": [ 00:22:34.032 { 00:22:34.032 "subsystem": "keyring", 00:22:34.032 "config": [] 00:22:34.032 }, 00:22:34.032 { 00:22:34.032 "subsystem": "iobuf", 00:22:34.032 "config": [ 00:22:34.033 { 00:22:34.033 "method": "iobuf_set_options", 00:22:34.033 "params": { 00:22:34.033 "small_pool_count": 8192, 00:22:34.033 "large_pool_count": 1024, 00:22:34.033 "small_bufsize": 8192, 00:22:34.033 "large_bufsize": 135168 00:22:34.033 } 00:22:34.033 } 00:22:34.033 ] 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "subsystem": "sock", 00:22:34.033 "config": [ 00:22:34.033 { 00:22:34.033 "method": "sock_set_default_impl", 00:22:34.033 "params": { 00:22:34.033 "impl_name": "posix" 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "sock_impl_set_options", 00:22:34.033 "params": { 00:22:34.033 "impl_name": "ssl", 00:22:34.033 "recv_buf_size": 4096, 00:22:34.033 "send_buf_size": 4096, 00:22:34.033 "enable_recv_pipe": true, 00:22:34.033 "enable_quickack": false, 00:22:34.033 "enable_placement_id": 0, 00:22:34.033 "enable_zerocopy_send_server": true, 00:22:34.033 "enable_zerocopy_send_client": false, 00:22:34.033 "zerocopy_threshold": 0, 00:22:34.033 "tls_version": 0, 00:22:34.033 "enable_ktls": false 00:22:34.033 
} 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "sock_impl_set_options", 00:22:34.033 "params": { 00:22:34.033 "impl_name": "posix", 00:22:34.033 "recv_buf_size": 2097152, 00:22:34.033 "send_buf_size": 2097152, 00:22:34.033 "enable_recv_pipe": true, 00:22:34.033 "enable_quickack": false, 00:22:34.033 "enable_placement_id": 0, 00:22:34.033 "enable_zerocopy_send_server": true, 00:22:34.033 "enable_zerocopy_send_client": false, 00:22:34.033 "zerocopy_threshold": 0, 00:22:34.033 "tls_version": 0, 00:22:34.033 "enable_ktls": false 00:22:34.033 } 00:22:34.033 } 00:22:34.033 ] 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "subsystem": "vmd", 00:22:34.033 "config": [] 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "subsystem": "accel", 00:22:34.033 "config": [ 00:22:34.033 { 00:22:34.033 "method": "accel_set_options", 00:22:34.033 "params": { 00:22:34.033 "small_cache_size": 128, 00:22:34.033 "large_cache_size": 16, 00:22:34.033 "task_count": 2048, 00:22:34.033 "sequence_count": 2048, 00:22:34.033 "buf_count": 2048 00:22:34.033 } 00:22:34.033 } 00:22:34.033 ] 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "subsystem": "bdev", 00:22:34.033 "config": [ 00:22:34.033 { 00:22:34.033 "method": "bdev_set_options", 00:22:34.033 "params": { 00:22:34.033 "bdev_io_pool_size": 65535, 00:22:34.033 "bdev_io_cache_size": 256, 00:22:34.033 "bdev_auto_examine": true, 00:22:34.033 "iobuf_small_cache_size": 128, 00:22:34.033 "iobuf_large_cache_size": 16 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "bdev_raid_set_options", 00:22:34.033 "params": { 00:22:34.033 "process_window_size_kb": 1024 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "bdev_iscsi_set_options", 00:22:34.033 "params": { 00:22:34.033 "timeout_sec": 30 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "bdev_nvme_set_options", 00:22:34.033 "params": { 00:22:34.033 "action_on_timeout": "none", 00:22:34.033 "timeout_us": 0, 00:22:34.033 "timeout_admin_us": 0, 00:22:34.033 "keep_alive_timeout_ms": 10000, 00:22:34.033 "arbitration_burst": 0, 00:22:34.033 "low_priority_weight": 0, 00:22:34.033 "medium_priority_weight": 0, 00:22:34.033 "high_priority_weight": 0, 00:22:34.033 "nvme_adminq_poll_period_us": 10000, 00:22:34.033 "nvme_ioq_poll_period_us": 0, 00:22:34.033 "io_queue_requests": 0, 00:22:34.033 "delay_cmd_submit": true, 00:22:34.033 "transport_retry_count": 4, 00:22:34.033 "bdev_retry_count": 3, 00:22:34.033 "transport_ack_timeout": 0, 00:22:34.033 "ctrlr_loss_timeout_sec": 0, 00:22:34.033 "reconnect_delay_sec": 0, 00:22:34.033 "fast_io_fail_timeout_sec": 0, 00:22:34.033 "disable_auto_failback": false, 00:22:34.033 "generate_uuids": false, 00:22:34.033 "transport_tos": 0, 00:22:34.033 "nvme_error_stat": false, 00:22:34.033 "rdma_srq_size": 0, 00:22:34.033 "io_path_stat": false, 00:22:34.033 "allow_accel_sequence": false, 00:22:34.033 "rdma_max_cq_size": 0, 00:22:34.033 "rdma_cm_event_timeout_ms": 0, 00:22:34.033 "dhchap_digests": [ 00:22:34.033 "sha256", 00:22:34.033 "sha384", 00:22:34.033 "sha512" 00:22:34.033 ], 00:22:34.033 "dhchap_dhgroups": [ 00:22:34.033 "null", 00:22:34.033 "ffdhe2048", 00:22:34.033 "ffdhe3072", 00:22:34.033 "ffdhe4096", 00:22:34.033 "ffdhe6144", 00:22:34.033 "ffdhe8192" 00:22:34.033 ] 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "bdev_nvme_set_hotplug", 00:22:34.033 "params": { 00:22:34.033 "period_us": 100000, 00:22:34.033 "enable": false 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "bdev_malloc_create", 
00:22:34.033 "params": { 00:22:34.033 "name": "malloc0", 00:22:34.033 "num_blocks": 8192, 00:22:34.033 "block_size": 4096, 00:22:34.033 "physical_block_size": 4096, 00:22:34.033 "uuid": "fdf04313-acfa-4a4c-90af-1c9b3c3bd2d9", 00:22:34.033 "optimal_io_boundary": 0 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "bdev_wait_for_examine" 00:22:34.033 } 00:22:34.033 ] 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "subsystem": "nbd", 00:22:34.033 "config": [] 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "subsystem": "scheduler", 00:22:34.033 "config": [ 00:22:34.033 { 00:22:34.033 "method": "framework_set_scheduler", 00:22:34.033 "params": { 00:22:34.033 "name": "static" 00:22:34.033 } 00:22:34.033 } 00:22:34.033 ] 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "subsystem": "nvmf", 00:22:34.033 "config": [ 00:22:34.033 { 00:22:34.033 "method": "nvmf_set_config", 00:22:34.033 "params": { 00:22:34.033 "discovery_filter": "match_any", 00:22:34.033 "admin_cmd_passthru": { 00:22:34.033 "identify_ctrlr": false 00:22:34.033 } 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "nvmf_set_max_subsystems", 00:22:34.033 "params": { 00:22:34.033 "max_subsystems": 1024 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "nvmf_set_crdt", 00:22:34.033 "params": { 00:22:34.033 "crdt1": 0, 00:22:34.033 "crdt2": 0, 00:22:34.033 "crdt3": 0 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "nvmf_create_transport", 00:22:34.033 "params": { 00:22:34.033 "trtype": "TCP", 00:22:34.033 "max_queue_depth": 128, 00:22:34.033 "max_io_qpairs_per_ctrlr": 127, 00:22:34.033 "in_capsule_data_size": 4096, 00:22:34.033 "max_io_size": 131072, 00:22:34.033 "io_unit_size": 131072, 00:22:34.033 "max_aq_depth": 128, 00:22:34.033 "num_shared_buffers": 511, 00:22:34.033 "buf_cache_size": 4294967295, 00:22:34.033 "dif_insert_or_strip": false, 00:22:34.033 "zcopy": false, 00:22:34.033 "c2h_success": false, 00:22:34.033 "sock_priority": 0, 00:22:34.033 "abort_timeout_sec": 1, 00:22:34.033 "ack_timeout": 0, 00:22:34.033 "data_wr_pool_size": 0 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "nvmf_create_subsystem", 00:22:34.033 "params": { 00:22:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.033 "allow_any_host": false, 00:22:34.033 "serial_number": "SPDK00000000000001", 00:22:34.033 "model_number": "SPDK bdev Controller", 00:22:34.033 "max_namespaces": 10, 00:22:34.033 "min_cntlid": 1, 00:22:34.033 "max_cntlid": 65519, 00:22:34.033 "ana_reporting": false 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "nvmf_subsystem_add_host", 00:22:34.033 "params": { 00:22:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.033 "host": "nqn.2016-06.io.spdk:host1", 00:22:34.033 "psk": "/tmp/tmp.hW59mtHl8Q" 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "nvmf_subsystem_add_ns", 00:22:34.033 "params": { 00:22:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.033 "namespace": { 00:22:34.033 "nsid": 1, 00:22:34.033 "bdev_name": "malloc0", 00:22:34.033 "nguid": "FDF04313ACFA4A4C90AF1C9B3C3BD2D9", 00:22:34.033 "uuid": "fdf04313-acfa-4a4c-90af-1c9b3c3bd2d9", 00:22:34.033 "no_auto_visible": false 00:22:34.033 } 00:22:34.033 } 00:22:34.033 }, 00:22:34.033 { 00:22:34.033 "method": "nvmf_subsystem_add_listener", 00:22:34.033 "params": { 00:22:34.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.033 "listen_address": { 00:22:34.033 "trtype": "TCP", 00:22:34.033 "adrfam": "IPv4", 00:22:34.033 "traddr": "10.0.0.2", 00:22:34.033 
"trsvcid": "4420" 00:22:34.033 }, 00:22:34.033 "secure_channel": true 00:22:34.033 } 00:22:34.033 } 00:22:34.033 ] 00:22:34.033 } 00:22:34.033 ] 00:22:34.033 }' 00:22:34.033 07:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:34.597 07:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:34.597 "subsystems": [ 00:22:34.597 { 00:22:34.597 "subsystem": "keyring", 00:22:34.597 "config": [] 00:22:34.597 }, 00:22:34.597 { 00:22:34.597 "subsystem": "iobuf", 00:22:34.597 "config": [ 00:22:34.597 { 00:22:34.597 "method": "iobuf_set_options", 00:22:34.597 "params": { 00:22:34.597 "small_pool_count": 8192, 00:22:34.597 "large_pool_count": 1024, 00:22:34.597 "small_bufsize": 8192, 00:22:34.597 "large_bufsize": 135168 00:22:34.597 } 00:22:34.597 } 00:22:34.597 ] 00:22:34.597 }, 00:22:34.597 { 00:22:34.597 "subsystem": "sock", 00:22:34.597 "config": [ 00:22:34.597 { 00:22:34.597 "method": "sock_set_default_impl", 00:22:34.597 "params": { 00:22:34.597 "impl_name": "posix" 00:22:34.597 } 00:22:34.597 }, 00:22:34.597 { 00:22:34.597 "method": "sock_impl_set_options", 00:22:34.597 "params": { 00:22:34.597 "impl_name": "ssl", 00:22:34.597 "recv_buf_size": 4096, 00:22:34.597 "send_buf_size": 4096, 00:22:34.597 "enable_recv_pipe": true, 00:22:34.597 "enable_quickack": false, 00:22:34.597 "enable_placement_id": 0, 00:22:34.597 "enable_zerocopy_send_server": true, 00:22:34.597 "enable_zerocopy_send_client": false, 00:22:34.597 "zerocopy_threshold": 0, 00:22:34.597 "tls_version": 0, 00:22:34.597 "enable_ktls": false 00:22:34.597 } 00:22:34.597 }, 00:22:34.597 { 00:22:34.597 "method": "sock_impl_set_options", 00:22:34.597 "params": { 00:22:34.597 "impl_name": "posix", 00:22:34.597 "recv_buf_size": 2097152, 00:22:34.597 "send_buf_size": 2097152, 00:22:34.597 "enable_recv_pipe": true, 00:22:34.597 "enable_quickack": false, 00:22:34.597 "enable_placement_id": 0, 00:22:34.597 "enable_zerocopy_send_server": true, 00:22:34.597 "enable_zerocopy_send_client": false, 00:22:34.597 "zerocopy_threshold": 0, 00:22:34.597 "tls_version": 0, 00:22:34.597 "enable_ktls": false 00:22:34.597 } 00:22:34.597 } 00:22:34.597 ] 00:22:34.597 }, 00:22:34.597 { 00:22:34.597 "subsystem": "vmd", 00:22:34.597 "config": [] 00:22:34.597 }, 00:22:34.597 { 00:22:34.597 "subsystem": "accel", 00:22:34.597 "config": [ 00:22:34.598 { 00:22:34.598 "method": "accel_set_options", 00:22:34.598 "params": { 00:22:34.598 "small_cache_size": 128, 00:22:34.598 "large_cache_size": 16, 00:22:34.598 "task_count": 2048, 00:22:34.598 "sequence_count": 2048, 00:22:34.598 "buf_count": 2048 00:22:34.598 } 00:22:34.598 } 00:22:34.598 ] 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "subsystem": "bdev", 00:22:34.598 "config": [ 00:22:34.598 { 00:22:34.598 "method": "bdev_set_options", 00:22:34.598 "params": { 00:22:34.598 "bdev_io_pool_size": 65535, 00:22:34.598 "bdev_io_cache_size": 256, 00:22:34.598 "bdev_auto_examine": true, 00:22:34.598 "iobuf_small_cache_size": 128, 00:22:34.598 "iobuf_large_cache_size": 16 00:22:34.598 } 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "method": "bdev_raid_set_options", 00:22:34.598 "params": { 00:22:34.598 "process_window_size_kb": 1024 00:22:34.598 } 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "method": "bdev_iscsi_set_options", 00:22:34.598 "params": { 00:22:34.598 "timeout_sec": 30 00:22:34.598 } 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "method": "bdev_nvme_set_options", 00:22:34.598 "params": { 
00:22:34.598 "action_on_timeout": "none", 00:22:34.598 "timeout_us": 0, 00:22:34.598 "timeout_admin_us": 0, 00:22:34.598 "keep_alive_timeout_ms": 10000, 00:22:34.598 "arbitration_burst": 0, 00:22:34.598 "low_priority_weight": 0, 00:22:34.598 "medium_priority_weight": 0, 00:22:34.598 "high_priority_weight": 0, 00:22:34.598 "nvme_adminq_poll_period_us": 10000, 00:22:34.598 "nvme_ioq_poll_period_us": 0, 00:22:34.598 "io_queue_requests": 512, 00:22:34.598 "delay_cmd_submit": true, 00:22:34.598 "transport_retry_count": 4, 00:22:34.598 "bdev_retry_count": 3, 00:22:34.598 "transport_ack_timeout": 0, 00:22:34.598 "ctrlr_loss_timeout_sec": 0, 00:22:34.598 "reconnect_delay_sec": 0, 00:22:34.598 "fast_io_fail_timeout_sec": 0, 00:22:34.598 "disable_auto_failback": false, 00:22:34.598 "generate_uuids": false, 00:22:34.598 "transport_tos": 0, 00:22:34.598 "nvme_error_stat": false, 00:22:34.598 "rdma_srq_size": 0, 00:22:34.598 "io_path_stat": false, 00:22:34.598 "allow_accel_sequence": false, 00:22:34.598 "rdma_max_cq_size": 0, 00:22:34.598 "rdma_cm_event_timeout_ms": 0, 00:22:34.598 "dhchap_digests": [ 00:22:34.598 "sha256", 00:22:34.598 "sha384", 00:22:34.598 "sha512" 00:22:34.598 ], 00:22:34.598 "dhchap_dhgroups": [ 00:22:34.598 "null", 00:22:34.598 "ffdhe2048", 00:22:34.598 "ffdhe3072", 00:22:34.598 "ffdhe4096", 00:22:34.598 "ffdhe6144", 00:22:34.598 "ffdhe8192" 00:22:34.598 ] 00:22:34.598 } 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "method": "bdev_nvme_attach_controller", 00:22:34.598 "params": { 00:22:34.598 "name": "TLSTEST", 00:22:34.598 "trtype": "TCP", 00:22:34.598 "adrfam": "IPv4", 00:22:34.598 "traddr": "10.0.0.2", 00:22:34.598 "trsvcid": "4420", 00:22:34.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.598 "prchk_reftag": false, 00:22:34.598 "prchk_guard": false, 00:22:34.598 "ctrlr_loss_timeout_sec": 0, 00:22:34.598 "reconnect_delay_sec": 0, 00:22:34.598 "fast_io_fail_timeout_sec": 0, 00:22:34.598 "psk": "/tmp/tmp.hW59mtHl8Q", 00:22:34.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.598 "hdgst": false, 00:22:34.598 "ddgst": false 00:22:34.598 } 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "method": "bdev_nvme_set_hotplug", 00:22:34.598 "params": { 00:22:34.598 "period_us": 100000, 00:22:34.598 "enable": false 00:22:34.598 } 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "method": "bdev_wait_for_examine" 00:22:34.598 } 00:22:34.598 ] 00:22:34.598 }, 00:22:34.598 { 00:22:34.598 "subsystem": "nbd", 00:22:34.598 "config": [] 00:22:34.598 } 00:22:34.598 ] 00:22:34.598 }' 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1555985 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1555985 ']' 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1555985 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555985 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555985' 00:22:34.598 killing process with pid 1555985 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1555985 
00:22:34.598 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.598 00:22:34.598 Latency(us) 00:22:34.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.598 =================================================================================================================== 00:22:34.598 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.598 [2024-07-13 07:10:03.834251] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:34.598 07:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1555985 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1555703 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1555703 ']' 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1555703 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555703 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555703' 00:22:34.856 killing process with pid 1555703 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1555703 00:22:34.856 [2024-07-13 07:10:04.085644] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:34.856 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1555703 00:22:35.114 07:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:35.114 07:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.114 07:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:35.114 "subsystems": [ 00:22:35.114 { 00:22:35.114 "subsystem": "keyring", 00:22:35.114 "config": [] 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "subsystem": "iobuf", 00:22:35.114 "config": [ 00:22:35.114 { 00:22:35.114 "method": "iobuf_set_options", 00:22:35.114 "params": { 00:22:35.114 "small_pool_count": 8192, 00:22:35.114 "large_pool_count": 1024, 00:22:35.114 "small_bufsize": 8192, 00:22:35.114 "large_bufsize": 135168 00:22:35.114 } 00:22:35.114 } 00:22:35.114 ] 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "subsystem": "sock", 00:22:35.114 "config": [ 00:22:35.114 { 00:22:35.114 "method": "sock_set_default_impl", 00:22:35.114 "params": { 00:22:35.114 "impl_name": "posix" 00:22:35.114 } 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "method": "sock_impl_set_options", 00:22:35.114 "params": { 00:22:35.114 "impl_name": "ssl", 00:22:35.114 "recv_buf_size": 4096, 00:22:35.114 "send_buf_size": 4096, 00:22:35.114 "enable_recv_pipe": true, 00:22:35.114 "enable_quickack": false, 00:22:35.114 "enable_placement_id": 0, 00:22:35.114 "enable_zerocopy_send_server": true, 00:22:35.114 "enable_zerocopy_send_client": false, 00:22:35.114 "zerocopy_threshold": 0, 00:22:35.114 "tls_version": 0, 00:22:35.114 "enable_ktls": false 00:22:35.114 } 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "method": 
"sock_impl_set_options", 00:22:35.114 "params": { 00:22:35.114 "impl_name": "posix", 00:22:35.114 "recv_buf_size": 2097152, 00:22:35.114 "send_buf_size": 2097152, 00:22:35.114 "enable_recv_pipe": true, 00:22:35.114 "enable_quickack": false, 00:22:35.114 "enable_placement_id": 0, 00:22:35.114 "enable_zerocopy_send_server": true, 00:22:35.114 "enable_zerocopy_send_client": false, 00:22:35.114 "zerocopy_threshold": 0, 00:22:35.114 "tls_version": 0, 00:22:35.114 "enable_ktls": false 00:22:35.114 } 00:22:35.114 } 00:22:35.114 ] 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "subsystem": "vmd", 00:22:35.114 "config": [] 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "subsystem": "accel", 00:22:35.114 "config": [ 00:22:35.114 { 00:22:35.114 "method": "accel_set_options", 00:22:35.114 "params": { 00:22:35.114 "small_cache_size": 128, 00:22:35.114 "large_cache_size": 16, 00:22:35.114 "task_count": 2048, 00:22:35.114 "sequence_count": 2048, 00:22:35.114 "buf_count": 2048 00:22:35.114 } 00:22:35.114 } 00:22:35.114 ] 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "subsystem": "bdev", 00:22:35.114 "config": [ 00:22:35.114 { 00:22:35.114 "method": "bdev_set_options", 00:22:35.114 "params": { 00:22:35.114 "bdev_io_pool_size": 65535, 00:22:35.114 "bdev_io_cache_size": 256, 00:22:35.114 "bdev_auto_examine": true, 00:22:35.114 "iobuf_small_cache_size": 128, 00:22:35.114 "iobuf_large_cache_size": 16 00:22:35.114 } 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "method": "bdev_raid_set_options", 00:22:35.114 "params": { 00:22:35.114 "process_window_size_kb": 1024 00:22:35.114 } 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "method": "bdev_iscsi_set_options", 00:22:35.114 "params": { 00:22:35.114 "timeout_sec": 30 00:22:35.114 } 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "method": "bdev_nvme_set_options", 00:22:35.114 "params": { 00:22:35.114 "action_on_timeout": "none", 00:22:35.114 "timeout_us": 0, 00:22:35.114 "timeout_admin_us": 0, 00:22:35.114 "keep_alive_timeout_ms": 10000, 00:22:35.114 "arbitration_burst": 0, 00:22:35.114 "low_priority_weight": 0, 00:22:35.114 "medium_priority_weight": 0, 00:22:35.114 "high_priority_weight": 0, 00:22:35.114 "nvme_adminq_poll_period_us": 10000, 00:22:35.114 "nvme_ioq_poll_period_us": 0, 00:22:35.114 "io_queue_requests": 0, 00:22:35.114 "delay_cmd_submit": true, 00:22:35.114 "transport_retry_count": 4, 00:22:35.114 "bdev_retry_count": 3, 00:22:35.114 "transport_ack_timeout": 0, 00:22:35.114 "ctrlr_loss_timeout_sec": 0, 00:22:35.114 "reconnect_delay_sec": 0, 00:22:35.114 "fast_io_fail_timeout_sec": 0, 00:22:35.114 "disable_auto_failback": false, 00:22:35.114 "generate_uuids": false, 00:22:35.114 "transport_tos": 0, 00:22:35.114 "nvme_error_stat": false, 00:22:35.114 "rdma_srq_size": 0, 00:22:35.114 "io_path_stat": false, 00:22:35.114 "allow_accel_sequence": false, 00:22:35.114 "rdma_max_cq_size": 0, 00:22:35.114 "rdma_cm_event_timeout_ms": 0, 00:22:35.114 "dhchap_digests": [ 00:22:35.114 "sha256", 00:22:35.114 "sha384", 00:22:35.114 "sha512" 00:22:35.114 ], 00:22:35.114 "dhchap_dhgroups": [ 00:22:35.114 "null", 00:22:35.114 "ffdhe2048", 00:22:35.114 "ffdhe3072", 00:22:35.114 "ffdhe4096", 00:22:35.114 "ffdhe 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.114 6144", 00:22:35.114 "ffdhe8192" 00:22:35.114 ] 00:22:35.114 } 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 "method": "bdev_nvme_set_hotplug", 00:22:35.114 "params": { 00:22:35.114 "period_us": 100000, 00:22:35.114 "enable": false 00:22:35.114 } 00:22:35.114 }, 00:22:35.114 { 00:22:35.114 
"method": "bdev_malloc_create", 00:22:35.114 "params": { 00:22:35.115 "name": "malloc0", 00:22:35.115 "num_blocks": 8192, 00:22:35.115 "block_size": 4096, 00:22:35.115 "physical_block_size": 4096, 00:22:35.115 "uuid": "fdf04313-acfa-4a4c-90af-1c9b3c3bd2d9", 00:22:35.115 "optimal_io_boundary": 0 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "bdev_wait_for_examine" 00:22:35.115 } 00:22:35.115 ] 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "subsystem": "nbd", 00:22:35.115 "config": [] 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "subsystem": "scheduler", 00:22:35.115 "config": [ 00:22:35.115 { 00:22:35.115 "method": "framework_set_scheduler", 00:22:35.115 "params": { 00:22:35.115 "name": "static" 00:22:35.115 } 00:22:35.115 } 00:22:35.115 ] 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "subsystem": "nvmf", 00:22:35.115 "config": [ 00:22:35.115 { 00:22:35.115 "method": "nvmf_set_config", 00:22:35.115 "params": { 00:22:35.115 "discovery_filter": "match_any", 00:22:35.115 "admin_cmd_passthru": { 00:22:35.115 "identify_ctrlr": false 00:22:35.115 } 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "nvmf_set_max_subsystems", 00:22:35.115 "params": { 00:22:35.115 "max_subsystems": 1024 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "nvmf_set_crdt", 00:22:35.115 "params": { 00:22:35.115 "crdt1": 0, 00:22:35.115 "crdt2": 0, 00:22:35.115 "crdt3": 0 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "nvmf_create_transport", 00:22:35.115 "params": { 00:22:35.115 "trtype": "TCP", 00:22:35.115 "max_queue_depth": 128, 00:22:35.115 "max_io_qpairs_per_ctrlr": 127, 00:22:35.115 "in_capsule_data_size": 4096, 00:22:35.115 "max_io_size": 131072, 00:22:35.115 "io_unit_size": 131072, 00:22:35.115 "max_aq_depth": 128, 00:22:35.115 "num_shared_buffers": 511, 00:22:35.115 "buf_cache_size": 4294967295, 00:22:35.115 "dif_insert_or_strip": false, 00:22:35.115 "zcopy": false, 00:22:35.115 "c2h_success": false, 00:22:35.115 "sock_priority": 0, 00:22:35.115 "abort_timeout_sec": 1, 00:22:35.115 "ack_timeout": 0, 00:22:35.115 "data_wr_pool_size": 0 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "nvmf_create_subsystem", 00:22:35.115 "params": { 00:22:35.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.115 "allow_any_host": false, 00:22:35.115 "serial_number": "SPDK00000000000001", 00:22:35.115 "model_number": "SPDK bdev Controller", 00:22:35.115 "max_namespaces": 10, 00:22:35.115 "min_cntlid": 1, 00:22:35.115 "max_cntlid": 65519, 00:22:35.115 "ana_reporting": false 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "nvmf_subsystem_add_host", 00:22:35.115 "params": { 00:22:35.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.115 "host": "nqn.2016-06.io.spdk:host1", 00:22:35.115 "psk": "/tmp/tmp.hW59mtHl8Q" 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "nvmf_subsystem_add_ns", 00:22:35.115 "params": { 00:22:35.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.115 "namespace": { 00:22:35.115 "nsid": 1, 00:22:35.115 "bdev_name": "malloc0", 00:22:35.115 "nguid": "FDF04313ACFA4A4C90AF1C9B3C3BD2D9", 00:22:35.115 "uuid": "fdf04313-acfa-4a4c-90af-1c9b3c3bd2d9", 00:22:35.115 "no_auto_visible": false 00:22:35.115 } 00:22:35.115 } 00:22:35.115 }, 00:22:35.115 { 00:22:35.115 "method": "nvmf_subsystem_add_listener", 00:22:35.115 "params": { 00:22:35.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.115 "listen_address": { 00:22:35.115 "trtype": "TCP", 00:22:35.115 "adrfam": "IPv4", 00:22:35.115 
"traddr": "10.0.0.2", 00:22:35.115 "trsvcid": "4420" 00:22:35.115 }, 00:22:35.115 "secure_channel": true 00:22:35.115 } 00:22:35.115 } 00:22:35.115 ] 00:22:35.115 } 00:22:35.115 ] 00:22:35.115 }' 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1556208 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1556208 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1556208 ']' 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.115 07:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 [2024-07-13 07:10:04.384878] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:35.115 [2024-07-13 07:10:04.384985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.115 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.115 [2024-07-13 07:10:04.422469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:35.115 [2024-07-13 07:10:04.454174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.115 [2024-07-13 07:10:04.546485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.115 [2024-07-13 07:10:04.546543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.115 [2024-07-13 07:10:04.546570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.115 [2024-07-13 07:10:04.546585] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.115 [2024-07-13 07:10:04.546597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:35.115 [2024-07-13 07:10:04.546678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.373 [2024-07-13 07:10:04.780363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.373 [2024-07-13 07:10:04.796309] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:35.373 [2024-07-13 07:10:04.812361] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.373 [2024-07-13 07:10:04.823071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1556294 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1556294 /var/tmp/bdevperf.sock 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1556294 ']' 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.940 07:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:35.940 "subsystems": [ 00:22:35.940 { 00:22:35.940 "subsystem": "keyring", 00:22:35.940 "config": [] 00:22:35.940 }, 00:22:35.940 { 00:22:35.940 "subsystem": "iobuf", 00:22:35.940 "config": [ 00:22:35.940 { 00:22:35.940 "method": "iobuf_set_options", 00:22:35.940 "params": { 00:22:35.940 "small_pool_count": 8192, 00:22:35.940 "large_pool_count": 1024, 00:22:35.940 "small_bufsize": 8192, 00:22:35.940 "large_bufsize": 135168 00:22:35.940 } 00:22:35.940 } 00:22:35.940 ] 00:22:35.940 }, 00:22:35.940 { 00:22:35.940 "subsystem": "sock", 00:22:35.940 "config": [ 00:22:35.940 { 00:22:35.940 "method": "sock_set_default_impl", 00:22:35.940 "params": { 00:22:35.940 "impl_name": "posix" 00:22:35.940 } 00:22:35.940 }, 00:22:35.940 { 00:22:35.940 "method": "sock_impl_set_options", 00:22:35.940 "params": { 00:22:35.940 "impl_name": "ssl", 00:22:35.940 "recv_buf_size": 4096, 00:22:35.940 "send_buf_size": 4096, 00:22:35.940 "enable_recv_pipe": true, 00:22:35.940 "enable_quickack": false, 00:22:35.940 "enable_placement_id": 0, 00:22:35.940 "enable_zerocopy_send_server": true, 00:22:35.940 "enable_zerocopy_send_client": false, 00:22:35.940 "zerocopy_threshold": 0, 00:22:35.940 "tls_version": 0, 00:22:35.940 "enable_ktls": false 00:22:35.940 } 00:22:35.940 }, 00:22:35.940 { 00:22:35.940 "method": "sock_impl_set_options", 00:22:35.940 "params": { 00:22:35.940 "impl_name": "posix", 00:22:35.940 "recv_buf_size": 2097152, 00:22:35.940 "send_buf_size": 2097152, 00:22:35.940 "enable_recv_pipe": true, 00:22:35.940 "enable_quickack": false, 00:22:35.940 "enable_placement_id": 0, 00:22:35.940 "enable_zerocopy_send_server": true, 00:22:35.940 "enable_zerocopy_send_client": false, 00:22:35.940 "zerocopy_threshold": 0, 00:22:35.940 "tls_version": 0, 00:22:35.940 "enable_ktls": false 00:22:35.940 } 00:22:35.940 } 00:22:35.940 ] 00:22:35.940 }, 00:22:35.940 { 00:22:35.940 "subsystem": "vmd", 00:22:35.940 "config": [] 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "subsystem": "accel", 00:22:35.941 "config": [ 00:22:35.941 { 00:22:35.941 "method": "accel_set_options", 00:22:35.941 "params": { 00:22:35.941 "small_cache_size": 128, 00:22:35.941 "large_cache_size": 16, 00:22:35.941 "task_count": 2048, 00:22:35.941 "sequence_count": 2048, 00:22:35.941 "buf_count": 2048 00:22:35.941 } 00:22:35.941 } 00:22:35.941 ] 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "subsystem": "bdev", 00:22:35.941 "config": [ 00:22:35.941 { 00:22:35.941 "method": "bdev_set_options", 00:22:35.941 "params": { 00:22:35.941 "bdev_io_pool_size": 65535, 00:22:35.941 "bdev_io_cache_size": 256, 00:22:35.941 "bdev_auto_examine": true, 00:22:35.941 "iobuf_small_cache_size": 128, 00:22:35.941 "iobuf_large_cache_size": 16 00:22:35.941 } 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "method": "bdev_raid_set_options", 00:22:35.941 "params": { 00:22:35.941 "process_window_size_kb": 1024 00:22:35.941 } 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "method": "bdev_iscsi_set_options", 00:22:35.941 "params": { 00:22:35.941 "timeout_sec": 30 00:22:35.941 } 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "method": "bdev_nvme_set_options", 00:22:35.941 "params": { 00:22:35.941 "action_on_timeout": "none", 00:22:35.941 "timeout_us": 0, 00:22:35.941 "timeout_admin_us": 0, 00:22:35.941 "keep_alive_timeout_ms": 10000, 00:22:35.941 
"arbitration_burst": 0, 00:22:35.941 "low_priority_weight": 0, 00:22:35.941 "medium_priority_weight": 0, 00:22:35.941 "high_priority_weight": 0, 00:22:35.941 "nvme_adminq_poll_period_us": 10000, 00:22:35.941 "nvme_ioq_poll_period_us": 0, 00:22:35.941 "io_queue_requests": 512, 00:22:35.941 "delay_cmd_submit": true, 00:22:35.941 "transport_retry_count": 4, 00:22:35.941 "bdev_retry_count": 3, 00:22:35.941 "transport_ack_timeout": 0, 00:22:35.941 "ctrlr_loss_timeout_sec": 0, 00:22:35.941 "reconnect_delay_sec": 0, 00:22:35.941 "fast_io_fail_timeout_sec": 0, 00:22:35.941 "disable_auto_failback": false, 00:22:35.941 "generate_uuids": false, 00:22:35.941 "transport_tos": 0, 00:22:35.941 "nvme_error_stat": false, 00:22:35.941 "rdma_srq_size": 0, 00:22:35.941 "io_path_stat": false, 00:22:35.941 "allow_accel_sequence": false, 00:22:35.941 "rdma_max_cq_size": 0, 00:22:35.941 "rdma_cm_event_timeout_ms": 0, 00:22:35.941 "dhchap_digests": [ 00:22:35.941 "sha256", 00:22:35.941 "sha384", 00:22:35.941 "sha512" 00:22:35.941 ], 00:22:35.941 "dhchap_dhgroups": [ 00:22:35.941 "null", 00:22:35.941 "ffdhe2048", 00:22:35.941 "ffdhe3072", 00:22:35.941 "ffdhe4096", 00:22:35.941 "ffdhe6144", 00:22:35.941 "ffdhe8192" 00:22:35.941 ] 00:22:35.941 } 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "method": "bdev_nvme_attach_controller", 00:22:35.941 "params": { 00:22:35.941 "name": "TLSTEST", 00:22:35.941 "trtype": "TCP", 00:22:35.941 "adrfam": "IPv4", 00:22:35.941 "traddr": "10.0.0.2", 00:22:35.941 "trsvcid": "4420", 00:22:35.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.941 "prchk_reftag": false, 00:22:35.941 "prchk_guard": false, 00:22:35.941 "ctrlr_loss_timeout_sec": 0, 00:22:35.941 "reconnect_delay_sec": 0, 00:22:35.941 "fast_io_fail_timeout_sec": 0, 00:22:35.941 "psk": "/tmp/tmp.hW59mtHl8Q", 00:22:35.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.941 "hdgst": false, 00:22:35.941 "ddgst": false 00:22:35.941 } 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "method": "bdev_nvme_set_hotplug", 00:22:35.941 "params": { 00:22:35.941 "period_us": 100000, 00:22:35.941 "enable": false 00:22:35.941 } 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "method": "bdev_wait_for_examine" 00:22:35.941 } 00:22:35.941 ] 00:22:35.941 }, 00:22:35.941 { 00:22:35.941 "subsystem": "nbd", 00:22:35.941 "config": [] 00:22:35.941 } 00:22:35.941 ] 00:22:35.941 }' 00:22:35.941 [2024-07-13 07:10:05.376196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:35.941 [2024-07-13 07:10:05.376282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556294 ] 00:22:36.199 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.199 [2024-07-13 07:10:05.409819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:36.199 [2024-07-13 07:10:05.439253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.199 [2024-07-13 07:10:05.528271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.458 [2024-07-13 07:10:05.692935] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.458 [2024-07-13 07:10:05.693051] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.023 07:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.023 07:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.023 07:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.281 Running I/O for 10 seconds... 00:22:47.260 00:22:47.260 Latency(us) 00:22:47.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.260 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.260 Verification LBA range: start 0x0 length 0x2000 00:22:47.260 TLSTESTn1 : 10.04 3215.33 12.56 0.00 0.00 39712.07 7475.96 68351.62 00:22:47.260 =================================================================================================================== 00:22:47.260 Total : 3215.33 12.56 0.00 0.00 39712.07 7475.96 68351.62 00:22:47.260 0 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1556294 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1556294 ']' 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1556294 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556294 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556294' 00:22:47.260 killing process with pid 1556294 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1556294 00:22:47.260 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.260 00:22:47.260 Latency(us) 00:22:47.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.260 =================================================================================================================== 00:22:47.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.260 [2024-07-13 07:10:16.607705] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.260 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1556294 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1556208 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1556208 ']' 00:22:47.518 07:10:16 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1556208 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556208 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:47.518 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556208' 00:22:47.518 killing process with pid 1556208 00:22:47.519 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1556208 00:22:47.519 [2024-07-13 07:10:16.858134] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:47.519 07:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1556208 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1557739 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1557739 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1557739 ']' 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.778 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.778 [2024-07-13 07:10:17.156044] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:47.778 [2024-07-13 07:10:17.156143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.778 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.778 [2024-07-13 07:10:17.193612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:47.778 [2024-07-13 07:10:17.226273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.036 [2024-07-13 07:10:17.316804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:48.036 [2024-07-13 07:10:17.316882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.036 [2024-07-13 07:10:17.316901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.036 [2024-07-13 07:10:17.316915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.036 [2024-07-13 07:10:17.316927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.036 [2024-07-13 07:10:17.316958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.hW59mtHl8Q 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hW59mtHl8Q 00:22:48.036 07:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:48.294 [2024-07-13 07:10:17.722235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.294 07:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.859 07:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:49.117 [2024-07-13 07:10:18.315886] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:49.117 [2024-07-13 07:10:18.316239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.117 07:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:49.374 malloc0 00:22:49.374 07:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:49.631 07:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hW59mtHl8Q 00:22:49.889 [2024-07-13 07:10:19.126453] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1557937 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1557937 /var/tmp/bdevperf.sock 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1557937 ']' 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.889 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.889 [2024-07-13 07:10:19.191252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:49.889 [2024-07-13 07:10:19.191339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557937 ] 00:22:49.889 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.889 [2024-07-13 07:10:19.224251] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:49.889 [2024-07-13 07:10:19.257124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.148 [2024-07-13 07:10:19.348296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.148 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.148 07:10:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:50.148 07:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hW59mtHl8Q 00:22:50.406 07:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:50.663 [2024-07-13 07:10:19.937066] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.663 nvme0n1 00:22:50.663 07:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:50.921 Running I/O for 1 seconds... 
00:22:51.851 00:22:51.851 Latency(us) 00:22:51.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.851 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:51.851 Verification LBA range: start 0x0 length 0x2000 00:22:51.851 nvme0n1 : 1.04 2917.94 11.40 0.00 0.00 43120.72 8252.68 63302.92 00:22:51.851 =================================================================================================================== 00:22:51.851 Total : 2917.94 11.40 0.00 0.00 43120.72 8252.68 63302.92 00:22:51.851 0 00:22:51.851 07:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1557937 00:22:51.851 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1557937 ']' 00:22:51.851 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1557937 00:22:51.851 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:51.851 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.851 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1557937 00:22:51.852 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:51.852 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:51.852 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1557937' 00:22:51.852 killing process with pid 1557937 00:22:51.852 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1557937 00:22:51.852 Received shutdown signal, test time was about 1.000000 seconds 00:22:51.852 00:22:51.852 Latency(us) 00:22:51.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.852 =================================================================================================================== 00:22:51.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.852 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1557937 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1557739 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1557739 ']' 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1557739 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1557739 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1557739' 00:22:52.109 killing process with pid 1557739 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1557739 00:22:52.109 [2024-07-13 07:10:21.442974] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:52.109 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1557739 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.367 
07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1558303 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1558303 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1558303 ']' 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.367 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.367 [2024-07-13 07:10:21.720948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:52.367 [2024-07-13 07:10:21.721027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.367 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.367 [2024-07-13 07:10:21.758334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:52.367 [2024-07-13 07:10:21.784503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.625 [2024-07-13 07:10:21.869316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.625 [2024-07-13 07:10:21.869382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.625 [2024-07-13 07:10:21.869420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.625 [2024-07-13 07:10:21.869431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.625 [2024-07-13 07:10:21.869441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
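The startup notices just above describe how to inspect the 0xFFFF tracepoint mask that every nvmf_tgt in this run is launched with (-e 0xFFFF). Both suggested routes, sketched below; the -f replay flag is assumed from spdk_trace's usual interface rather than shown in this log.

  # live snapshot of the running target's tracepoints, instance id 0
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis after the app exits
  cp /dev/shm/nvmf_trace.0 /tmp/ && ./build/bin/spdk_trace -f /tmp/nvmf_trace.0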
00:22:52.625 [2024-07-13 07:10:21.869471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.625 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.625 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:52.625 07:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.625 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.625 07:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.625 [2024-07-13 07:10:22.010045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.625 malloc0 00:22:52.625 [2024-07-13 07:10:22.042618] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.625 [2024-07-13 07:10:22.042875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1558328 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1558328 /var/tmp/bdevperf.sock 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1558328 ']' 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.625 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.883 [2024-07-13 07:10:22.112600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:52.883 [2024-07-13 07:10:22.112662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558328 ] 00:22:52.883 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.883 [2024-07-13 07:10:22.144954] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:52.883 [2024-07-13 07:10:22.175137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.883 [2024-07-13 07:10:22.266022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.140 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.140 07:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:53.140 07:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hW59mtHl8Q 00:22:53.397 07:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:53.653 [2024-07-13 07:10:22.935682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.653 nvme0n1 00:22:53.653 07:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.911 Running I/O for 1 seconds... 00:22:54.844 00:22:54.844 Latency(us) 00:22:54.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.844 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:54.844 Verification LBA range: start 0x0 length 0x2000 00:22:54.844 nvme0n1 : 1.04 2288.86 8.94 0.00 0.00 55021.70 8738.13 82332.63 00:22:54.844 =================================================================================================================== 00:22:54.844 Total : 2288.86 8.94 0.00 0.00 55021.70 8738.13 82332.63 00:22:54.844 0 00:22:54.844 07:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:54.844 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.844 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.844 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.844 07:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:54.844 "subsystems": [ 00:22:54.844 { 00:22:54.844 "subsystem": "keyring", 00:22:54.844 "config": [ 00:22:54.844 { 00:22:54.844 "method": "keyring_file_add_key", 00:22:54.844 "params": { 00:22:54.844 "name": "key0", 00:22:54.844 "path": "/tmp/tmp.hW59mtHl8Q" 00:22:54.844 } 00:22:54.844 } 00:22:54.844 ] 00:22:54.844 }, 00:22:54.844 { 00:22:54.844 "subsystem": "iobuf", 00:22:54.844 "config": [ 00:22:54.844 { 00:22:54.844 "method": "iobuf_set_options", 00:22:54.844 "params": { 00:22:54.844 "small_pool_count": 8192, 00:22:54.844 "large_pool_count": 1024, 00:22:54.844 "small_bufsize": 8192, 00:22:54.844 "large_bufsize": 135168 00:22:54.844 } 00:22:54.844 } 00:22:54.844 ] 00:22:54.844 }, 00:22:54.844 { 00:22:54.844 "subsystem": "sock", 00:22:54.844 "config": [ 00:22:54.844 { 00:22:54.844 "method": "sock_set_default_impl", 00:22:54.844 "params": { 00:22:54.844 "impl_name": "posix" 00:22:54.844 } 00:22:54.844 }, 00:22:54.844 { 00:22:54.844 "method": "sock_impl_set_options", 00:22:54.844 "params": { 00:22:54.844 "impl_name": "ssl", 00:22:54.844 "recv_buf_size": 4096, 00:22:54.844 "send_buf_size": 4096, 00:22:54.844 "enable_recv_pipe": true, 00:22:54.845 "enable_quickack": false, 00:22:54.845 "enable_placement_id": 0, 00:22:54.845 
"enable_zerocopy_send_server": true, 00:22:54.845 "enable_zerocopy_send_client": false, 00:22:54.845 "zerocopy_threshold": 0, 00:22:54.845 "tls_version": 0, 00:22:54.845 "enable_ktls": false 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "sock_impl_set_options", 00:22:54.845 "params": { 00:22:54.845 "impl_name": "posix", 00:22:54.845 "recv_buf_size": 2097152, 00:22:54.845 "send_buf_size": 2097152, 00:22:54.845 "enable_recv_pipe": true, 00:22:54.845 "enable_quickack": false, 00:22:54.845 "enable_placement_id": 0, 00:22:54.845 "enable_zerocopy_send_server": true, 00:22:54.845 "enable_zerocopy_send_client": false, 00:22:54.845 "zerocopy_threshold": 0, 00:22:54.845 "tls_version": 0, 00:22:54.845 "enable_ktls": false 00:22:54.845 } 00:22:54.845 } 00:22:54.845 ] 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "subsystem": "vmd", 00:22:54.845 "config": [] 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "subsystem": "accel", 00:22:54.845 "config": [ 00:22:54.845 { 00:22:54.845 "method": "accel_set_options", 00:22:54.845 "params": { 00:22:54.845 "small_cache_size": 128, 00:22:54.845 "large_cache_size": 16, 00:22:54.845 "task_count": 2048, 00:22:54.845 "sequence_count": 2048, 00:22:54.845 "buf_count": 2048 00:22:54.845 } 00:22:54.845 } 00:22:54.845 ] 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "subsystem": "bdev", 00:22:54.845 "config": [ 00:22:54.845 { 00:22:54.845 "method": "bdev_set_options", 00:22:54.845 "params": { 00:22:54.845 "bdev_io_pool_size": 65535, 00:22:54.845 "bdev_io_cache_size": 256, 00:22:54.845 "bdev_auto_examine": true, 00:22:54.845 "iobuf_small_cache_size": 128, 00:22:54.845 "iobuf_large_cache_size": 16 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "bdev_raid_set_options", 00:22:54.845 "params": { 00:22:54.845 "process_window_size_kb": 1024 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "bdev_iscsi_set_options", 00:22:54.845 "params": { 00:22:54.845 "timeout_sec": 30 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "bdev_nvme_set_options", 00:22:54.845 "params": { 00:22:54.845 "action_on_timeout": "none", 00:22:54.845 "timeout_us": 0, 00:22:54.845 "timeout_admin_us": 0, 00:22:54.845 "keep_alive_timeout_ms": 10000, 00:22:54.845 "arbitration_burst": 0, 00:22:54.845 "low_priority_weight": 0, 00:22:54.845 "medium_priority_weight": 0, 00:22:54.845 "high_priority_weight": 0, 00:22:54.845 "nvme_adminq_poll_period_us": 10000, 00:22:54.845 "nvme_ioq_poll_period_us": 0, 00:22:54.845 "io_queue_requests": 0, 00:22:54.845 "delay_cmd_submit": true, 00:22:54.845 "transport_retry_count": 4, 00:22:54.845 "bdev_retry_count": 3, 00:22:54.845 "transport_ack_timeout": 0, 00:22:54.845 "ctrlr_loss_timeout_sec": 0, 00:22:54.845 "reconnect_delay_sec": 0, 00:22:54.845 "fast_io_fail_timeout_sec": 0, 00:22:54.845 "disable_auto_failback": false, 00:22:54.845 "generate_uuids": false, 00:22:54.845 "transport_tos": 0, 00:22:54.845 "nvme_error_stat": false, 00:22:54.845 "rdma_srq_size": 0, 00:22:54.845 "io_path_stat": false, 00:22:54.845 "allow_accel_sequence": false, 00:22:54.845 "rdma_max_cq_size": 0, 00:22:54.845 "rdma_cm_event_timeout_ms": 0, 00:22:54.845 "dhchap_digests": [ 00:22:54.845 "sha256", 00:22:54.845 "sha384", 00:22:54.845 "sha512" 00:22:54.845 ], 00:22:54.845 "dhchap_dhgroups": [ 00:22:54.845 "null", 00:22:54.845 "ffdhe2048", 00:22:54.845 "ffdhe3072", 00:22:54.845 "ffdhe4096", 00:22:54.845 "ffdhe6144", 00:22:54.845 "ffdhe8192" 00:22:54.845 ] 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": 
"bdev_nvme_set_hotplug", 00:22:54.845 "params": { 00:22:54.845 "period_us": 100000, 00:22:54.845 "enable": false 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "bdev_malloc_create", 00:22:54.845 "params": { 00:22:54.845 "name": "malloc0", 00:22:54.845 "num_blocks": 8192, 00:22:54.845 "block_size": 4096, 00:22:54.845 "physical_block_size": 4096, 00:22:54.845 "uuid": "66790d38-9908-4bf1-8409-d16fe81dbdb4", 00:22:54.845 "optimal_io_boundary": 0 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "bdev_wait_for_examine" 00:22:54.845 } 00:22:54.845 ] 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "subsystem": "nbd", 00:22:54.845 "config": [] 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "subsystem": "scheduler", 00:22:54.845 "config": [ 00:22:54.845 { 00:22:54.845 "method": "framework_set_scheduler", 00:22:54.845 "params": { 00:22:54.845 "name": "static" 00:22:54.845 } 00:22:54.845 } 00:22:54.845 ] 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "subsystem": "nvmf", 00:22:54.845 "config": [ 00:22:54.845 { 00:22:54.845 "method": "nvmf_set_config", 00:22:54.845 "params": { 00:22:54.845 "discovery_filter": "match_any", 00:22:54.845 "admin_cmd_passthru": { 00:22:54.845 "identify_ctrlr": false 00:22:54.845 } 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "nvmf_set_max_subsystems", 00:22:54.845 "params": { 00:22:54.845 "max_subsystems": 1024 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "nvmf_set_crdt", 00:22:54.845 "params": { 00:22:54.845 "crdt1": 0, 00:22:54.845 "crdt2": 0, 00:22:54.845 "crdt3": 0 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "nvmf_create_transport", 00:22:54.845 "params": { 00:22:54.845 "trtype": "TCP", 00:22:54.845 "max_queue_depth": 128, 00:22:54.845 "max_io_qpairs_per_ctrlr": 127, 00:22:54.845 "in_capsule_data_size": 4096, 00:22:54.845 "max_io_size": 131072, 00:22:54.845 "io_unit_size": 131072, 00:22:54.845 "max_aq_depth": 128, 00:22:54.845 "num_shared_buffers": 511, 00:22:54.845 "buf_cache_size": 4294967295, 00:22:54.845 "dif_insert_or_strip": false, 00:22:54.845 "zcopy": false, 00:22:54.845 "c2h_success": false, 00:22:54.845 "sock_priority": 0, 00:22:54.845 "abort_timeout_sec": 1, 00:22:54.845 "ack_timeout": 0, 00:22:54.845 "data_wr_pool_size": 0 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "nvmf_create_subsystem", 00:22:54.845 "params": { 00:22:54.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.845 "allow_any_host": false, 00:22:54.845 "serial_number": "00000000000000000000", 00:22:54.845 "model_number": "SPDK bdev Controller", 00:22:54.845 "max_namespaces": 32, 00:22:54.845 "min_cntlid": 1, 00:22:54.845 "max_cntlid": 65519, 00:22:54.845 "ana_reporting": false 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "nvmf_subsystem_add_host", 00:22:54.845 "params": { 00:22:54.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.845 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.845 "psk": "key0" 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "nvmf_subsystem_add_ns", 00:22:54.845 "params": { 00:22:54.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.845 "namespace": { 00:22:54.845 "nsid": 1, 00:22:54.845 "bdev_name": "malloc0", 00:22:54.845 "nguid": "66790D3899084BF18409D16FE81DBDB4", 00:22:54.845 "uuid": "66790d38-9908-4bf1-8409-d16fe81dbdb4", 00:22:54.845 "no_auto_visible": false 00:22:54.845 } 00:22:54.845 } 00:22:54.845 }, 00:22:54.845 { 00:22:54.845 "method": "nvmf_subsystem_add_listener", 00:22:54.845 "params": { 
00:22:54.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.845 "listen_address": { 00:22:54.845 "trtype": "TCP", 00:22:54.845 "adrfam": "IPv4", 00:22:54.845 "traddr": "10.0.0.2", 00:22:54.845 "trsvcid": "4420" 00:22:54.845 }, 00:22:54.845 "secure_channel": true 00:22:54.845 } 00:22:54.845 } 00:22:54.845 ] 00:22:54.845 } 00:22:54.845 ] 00:22:54.845 }' 00:22:54.845 07:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:55.411 07:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:55.411 "subsystems": [ 00:22:55.411 { 00:22:55.411 "subsystem": "keyring", 00:22:55.411 "config": [ 00:22:55.411 { 00:22:55.411 "method": "keyring_file_add_key", 00:22:55.411 "params": { 00:22:55.411 "name": "key0", 00:22:55.411 "path": "/tmp/tmp.hW59mtHl8Q" 00:22:55.411 } 00:22:55.411 } 00:22:55.411 ] 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "subsystem": "iobuf", 00:22:55.411 "config": [ 00:22:55.411 { 00:22:55.411 "method": "iobuf_set_options", 00:22:55.411 "params": { 00:22:55.411 "small_pool_count": 8192, 00:22:55.411 "large_pool_count": 1024, 00:22:55.411 "small_bufsize": 8192, 00:22:55.411 "large_bufsize": 135168 00:22:55.411 } 00:22:55.411 } 00:22:55.411 ] 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "subsystem": "sock", 00:22:55.411 "config": [ 00:22:55.411 { 00:22:55.411 "method": "sock_set_default_impl", 00:22:55.411 "params": { 00:22:55.411 "impl_name": "posix" 00:22:55.411 } 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "method": "sock_impl_set_options", 00:22:55.411 "params": { 00:22:55.411 "impl_name": "ssl", 00:22:55.411 "recv_buf_size": 4096, 00:22:55.411 "send_buf_size": 4096, 00:22:55.411 "enable_recv_pipe": true, 00:22:55.411 "enable_quickack": false, 00:22:55.411 "enable_placement_id": 0, 00:22:55.411 "enable_zerocopy_send_server": true, 00:22:55.411 "enable_zerocopy_send_client": false, 00:22:55.411 "zerocopy_threshold": 0, 00:22:55.411 "tls_version": 0, 00:22:55.411 "enable_ktls": false 00:22:55.411 } 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "method": "sock_impl_set_options", 00:22:55.411 "params": { 00:22:55.411 "impl_name": "posix", 00:22:55.411 "recv_buf_size": 2097152, 00:22:55.411 "send_buf_size": 2097152, 00:22:55.411 "enable_recv_pipe": true, 00:22:55.411 "enable_quickack": false, 00:22:55.411 "enable_placement_id": 0, 00:22:55.411 "enable_zerocopy_send_server": true, 00:22:55.411 "enable_zerocopy_send_client": false, 00:22:55.411 "zerocopy_threshold": 0, 00:22:55.411 "tls_version": 0, 00:22:55.411 "enable_ktls": false 00:22:55.411 } 00:22:55.411 } 00:22:55.411 ] 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "subsystem": "vmd", 00:22:55.411 "config": [] 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "subsystem": "accel", 00:22:55.411 "config": [ 00:22:55.411 { 00:22:55.411 "method": "accel_set_options", 00:22:55.411 "params": { 00:22:55.411 "small_cache_size": 128, 00:22:55.411 "large_cache_size": 16, 00:22:55.411 "task_count": 2048, 00:22:55.411 "sequence_count": 2048, 00:22:55.411 "buf_count": 2048 00:22:55.411 } 00:22:55.411 } 00:22:55.411 ] 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "subsystem": "bdev", 00:22:55.411 "config": [ 00:22:55.411 { 00:22:55.411 "method": "bdev_set_options", 00:22:55.411 "params": { 00:22:55.411 "bdev_io_pool_size": 65535, 00:22:55.411 "bdev_io_cache_size": 256, 00:22:55.411 "bdev_auto_examine": true, 00:22:55.411 "iobuf_small_cache_size": 128, 00:22:55.411 "iobuf_large_cache_size": 16 00:22:55.411 } 00:22:55.411 }, 00:22:55.411 { 
00:22:55.411 "method": "bdev_raid_set_options", 00:22:55.411 "params": { 00:22:55.411 "process_window_size_kb": 1024 00:22:55.411 } 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "method": "bdev_iscsi_set_options", 00:22:55.411 "params": { 00:22:55.411 "timeout_sec": 30 00:22:55.411 } 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "method": "bdev_nvme_set_options", 00:22:55.411 "params": { 00:22:55.411 "action_on_timeout": "none", 00:22:55.411 "timeout_us": 0, 00:22:55.411 "timeout_admin_us": 0, 00:22:55.411 "keep_alive_timeout_ms": 10000, 00:22:55.411 "arbitration_burst": 0, 00:22:55.411 "low_priority_weight": 0, 00:22:55.411 "medium_priority_weight": 0, 00:22:55.411 "high_priority_weight": 0, 00:22:55.411 "nvme_adminq_poll_period_us": 10000, 00:22:55.411 "nvme_ioq_poll_period_us": 0, 00:22:55.411 "io_queue_requests": 512, 00:22:55.411 "delay_cmd_submit": true, 00:22:55.411 "transport_retry_count": 4, 00:22:55.411 "bdev_retry_count": 3, 00:22:55.411 "transport_ack_timeout": 0, 00:22:55.411 "ctrlr_loss_timeout_sec": 0, 00:22:55.411 "reconnect_delay_sec": 0, 00:22:55.411 "fast_io_fail_timeout_sec": 0, 00:22:55.411 "disable_auto_failback": false, 00:22:55.411 "generate_uuids": false, 00:22:55.411 "transport_tos": 0, 00:22:55.411 "nvme_error_stat": false, 00:22:55.411 "rdma_srq_size": 0, 00:22:55.411 "io_path_stat": false, 00:22:55.411 "allow_accel_sequence": false, 00:22:55.411 "rdma_max_cq_size": 0, 00:22:55.411 "rdma_cm_event_timeout_ms": 0, 00:22:55.411 "dhchap_digests": [ 00:22:55.411 "sha256", 00:22:55.411 "sha384", 00:22:55.411 "sha512" 00:22:55.411 ], 00:22:55.411 "dhchap_dhgroups": [ 00:22:55.411 "null", 00:22:55.411 "ffdhe2048", 00:22:55.411 "ffdhe3072", 00:22:55.411 "ffdhe4096", 00:22:55.411 "ffdhe6144", 00:22:55.411 "ffdhe8192" 00:22:55.411 ] 00:22:55.411 } 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "method": "bdev_nvme_attach_controller", 00:22:55.411 "params": { 00:22:55.411 "name": "nvme0", 00:22:55.411 "trtype": "TCP", 00:22:55.411 "adrfam": "IPv4", 00:22:55.411 "traddr": "10.0.0.2", 00:22:55.411 "trsvcid": "4420", 00:22:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.411 "prchk_reftag": false, 00:22:55.411 "prchk_guard": false, 00:22:55.411 "ctrlr_loss_timeout_sec": 0, 00:22:55.411 "reconnect_delay_sec": 0, 00:22:55.411 "fast_io_fail_timeout_sec": 0, 00:22:55.411 "psk": "key0", 00:22:55.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.411 "hdgst": false, 00:22:55.411 "ddgst": false 00:22:55.411 } 00:22:55.411 }, 00:22:55.411 { 00:22:55.411 "method": "bdev_nvme_set_hotplug", 00:22:55.411 "params": { 00:22:55.411 "period_us": 100000, 00:22:55.411 "enable": false 00:22:55.412 } 00:22:55.412 }, 00:22:55.412 { 00:22:55.412 "method": "bdev_enable_histogram", 00:22:55.412 "params": { 00:22:55.412 "name": "nvme0n1", 00:22:55.412 "enable": true 00:22:55.412 } 00:22:55.412 }, 00:22:55.412 { 00:22:55.412 "method": "bdev_wait_for_examine" 00:22:55.412 } 00:22:55.412 ] 00:22:55.412 }, 00:22:55.412 { 00:22:55.412 "subsystem": "nbd", 00:22:55.412 "config": [] 00:22:55.412 } 00:22:55.412 ] 00:22:55.412 }' 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1558328 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1558328 ']' 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1558328 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.412 07:10:24 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558328 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558328' 00:22:55.412 killing process with pid 1558328 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1558328 00:22:55.412 Received shutdown signal, test time was about 1.000000 seconds 00:22:55.412 00:22:55.412 Latency(us) 00:22:55.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.412 =================================================================================================================== 00:22:55.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:55.412 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1558328 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1558303 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1558303 ']' 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1558303 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558303 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558303' 00:22:55.669 killing process with pid 1558303 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1558303 00:22:55.669 07:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1558303 00:22:55.928 07:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:55.928 07:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.928 07:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:55.928 "subsystems": [ 00:22:55.928 { 00:22:55.928 "subsystem": "keyring", 00:22:55.928 "config": [ 00:22:55.928 { 00:22:55.928 "method": "keyring_file_add_key", 00:22:55.928 "params": { 00:22:55.928 "name": "key0", 00:22:55.928 "path": "/tmp/tmp.hW59mtHl8Q" 00:22:55.928 } 00:22:55.928 } 00:22:55.928 ] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "iobuf", 00:22:55.928 "config": [ 00:22:55.928 { 00:22:55.928 "method": "iobuf_set_options", 00:22:55.928 "params": { 00:22:55.928 "small_pool_count": 8192, 00:22:55.928 "large_pool_count": 1024, 00:22:55.928 "small_bufsize": 8192, 00:22:55.928 "large_bufsize": 135168 00:22:55.928 } 00:22:55.928 } 00:22:55.928 ] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "sock", 00:22:55.928 "config": [ 00:22:55.928 { 00:22:55.928 "method": "sock_set_default_impl", 00:22:55.928 "params": { 00:22:55.928 "impl_name": "posix" 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "sock_impl_set_options", 00:22:55.928 "params": { 00:22:55.928 "impl_name": "ssl", 00:22:55.928 "recv_buf_size": 4096, 00:22:55.928 "send_buf_size": 4096, 00:22:55.928 
"enable_recv_pipe": true, 00:22:55.928 "enable_quickack": false, 00:22:55.928 "enable_placement_id": 0, 00:22:55.928 "enable_zerocopy_send_server": true, 00:22:55.928 "enable_zerocopy_send_client": false, 00:22:55.928 "zerocopy_threshold": 0, 00:22:55.928 "tls_version": 0, 00:22:55.928 "enable_ktls": false 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "sock_impl_set_options", 00:22:55.928 "params": { 00:22:55.928 "impl_name": "posix", 00:22:55.928 "recv_buf_size": 2097152, 00:22:55.928 "send_buf_size": 2097152, 00:22:55.928 "enable_recv_pipe": true, 00:22:55.928 "enable_quickack": false, 00:22:55.928 "enable_placement_id": 0, 00:22:55.928 "enable_zerocopy_send_server": true, 00:22:55.928 "enable_zerocopy_send_client": false, 00:22:55.928 "zerocopy_threshold": 0, 00:22:55.928 "tls_version": 0, 00:22:55.928 "enable_ktls": false 00:22:55.928 } 00:22:55.928 } 00:22:55.928 ] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "vmd", 00:22:55.928 "config": [] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "accel", 00:22:55.928 "config": [ 00:22:55.928 { 00:22:55.928 "method": "accel_set_options", 00:22:55.928 "params": { 00:22:55.928 "small_cache_size": 128, 00:22:55.928 "large_cache_size": 16, 00:22:55.928 "task_count": 2048, 00:22:55.928 "sequence_count": 2048, 00:22:55.928 "buf_count": 2048 00:22:55.928 } 00:22:55.928 } 00:22:55.928 ] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "bdev", 00:22:55.928 "config": [ 00:22:55.928 { 00:22:55.928 "method": "bdev_set_options", 00:22:55.928 "params": { 00:22:55.928 "bdev_io_pool_size": 65535, 00:22:55.928 "bdev_io_cache_size": 256, 00:22:55.928 "bdev_auto_examine": true, 00:22:55.928 "iobuf_small_cache_size": 128, 00:22:55.928 "iobuf_large_cache_size": 16 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "bdev_raid_set_options", 00:22:55.928 "params": { 00:22:55.928 "process_window_size_kb": 1024 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "bdev_iscsi_set_options", 00:22:55.928 "params": { 00:22:55.928 "timeout_sec": 30 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "bdev_nvme_set_options", 00:22:55.928 "params": { 00:22:55.928 "action_on_timeout": "none", 00:22:55.928 "timeout_us": 0, 00:22:55.928 "timeout_admin_us": 0, 00:22:55.928 "keep_alive_timeout_ms": 10000, 00:22:55.928 "arbitration_burst": 0, 00:22:55.928 "low_priority_weight": 0, 00:22:55.928 "medium_priority_weight": 0, 00:22:55.928 "high_priority_weight": 0, 00:22:55.928 "nvme_adminq_poll_period_us": 10000, 00:22:55.928 "nvme_ioq_poll_period_us": 0, 00:22:55.928 "io_queue_requests": 0, 00:22:55.928 "delay_cmd_submit": true, 00:22:55.928 "transport_retry_count": 4, 00:22:55.928 "bdev_retry_count": 3, 00:22:55.928 "transport_ack_timeout": 0, 00:22:55.928 "ctrlr_loss_timeout_sec": 0, 00:22:55.928 "reconnect_delay_sec": 0, 00:22:55.928 "fast_io_fail_timeout_sec": 0, 00:22:55.928 "disable_auto_failback": false, 00:22:55.928 "generate_uuids": false, 00:22:55.928 "transport_tos": 0, 00:22:55.928 "nvme_error_stat": false, 00:22:55.928 "rdma_srq_size": 0, 00:22:55.928 "io_path_stat": false, 00:22:55.928 "allow_accel_sequence": false, 00:22:55.928 "rdma_max_cq_size": 0, 00:22:55.928 "rdma_cm_event_timeout_ms": 0, 00:22:55.928 "dhchap_digests": [ 00:22:55.928 "sha256", 00:22:55.928 "sha384", 00:22:55.928 "sha512" 00:22:55.928 ], 00:22:55.928 "dhchap_dhgroups": [ 00:22:55.928 "null", 00:22:55.928 "ffdhe2048", 00:22:55.928 "ffdhe3072", 00:22:55.928 "ffdhe4096", 00:22:55.928 "ffdhe6144", 
00:22:55.928 "ffdhe8192" 00:22:55.928 ] 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "bdev_nvme_set_hotplug", 00:22:55.928 "params": { 00:22:55.928 "period_us": 100000, 00:22:55.928 "enable": false 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "bdev_malloc_create", 00:22:55.928 "params": { 00:22:55.928 "name": "malloc0", 00:22:55.928 "num_blocks": 8192, 00:22:55.928 "block_size": 4096, 00:22:55.928 "physical_block_size": 4096, 00:22:55.928 "uuid": "66790d38-9908-4bf1-8409-d16fe81dbdb4", 00:22:55.928 "optimal_io_boundary": 0 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "bdev_wait_for_examine" 00:22:55.928 } 00:22:55.928 ] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "nbd", 00:22:55.928 "config": [] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "scheduler", 00:22:55.928 "config": [ 00:22:55.928 { 00:22:55.928 "method": "framework_set_scheduler", 00:22:55.928 "params": { 00:22:55.928 "name": "static" 00:22:55.928 } 00:22:55.928 } 00:22:55.928 ] 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "subsystem": "nvmf", 00:22:55.928 "config": [ 00:22:55.928 { 00:22:55.928 "method": "nvmf_set_config", 00:22:55.928 "params": { 00:22:55.928 "discovery_filter": "match_any", 00:22:55.928 "admin_cmd_passthru": { 00:22:55.928 "identify_ctrlr": false 00:22:55.928 } 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "nvmf_set_max_subsystems", 00:22:55.928 "params": { 00:22:55.928 "max_subsystems": 1024 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "nvmf_set_crdt", 00:22:55.928 "params": { 00:22:55.928 "crdt1": 0, 00:22:55.928 "crdt2": 0, 00:22:55.928 "crdt3": 0 00:22:55.928 } 00:22:55.928 }, 00:22:55.928 { 00:22:55.928 "method": "nvmf_create_transport", 00:22:55.928 "params": { 00:22:55.928 "trtype": "TCP", 00:22:55.929 "max_queue_depth": 128, 00:22:55.929 "max_io_qpairs_per_ctrlr": 127, 00:22:55.929 "in_capsule_data_size": 4096, 00:22:55.929 "max_io_size": 131072, 00:22:55.929 "io_unit_size": 131072, 00:22:55.929 "max_aq_depth": 128, 00:22:55.929 "num_shared_buffers": 511, 00:22:55.929 "buf_cache_size": 4294967295, 00:22:55.929 "dif_insert_or_strip": false, 00:22:55.929 "zcopy": false, 00:22:55.929 "c2h_success": false, 00:22:55.929 "sock_priority": 0, 00:22:55.929 "abort_timeout_sec": 1, 00:22:55.929 "ack_timeout": 0, 00:22:55.929 "data_wr_pool_size": 0 00:22:55.929 } 00:22:55.929 }, 00:22:55.929 { 00:22:55.929 "method": "nvmf_create_subsystem", 00:22:55.929 "params": { 00:22:55.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.929 "allow_any_host": false, 00:22:55.929 "serial_number": "00000000000000000000", 00:22:55.929 "model_number": "SPDK bdev Controller", 00:22:55.929 "max_namespaces": 32, 00:22:55.929 "min_cntlid": 1, 00:22:55.929 "max_cntlid": 65519, 00:22:55.929 "ana_reporting": false 00:22:55.929 } 00:22:55.929 }, 00:22:55.929 { 00:22:55.929 "method": "nvmf_subsystem_add_host", 00:22:55.929 "params": { 00:22:55.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.929 "host": "nqn.2016-06.io.spdk:host1", 00:22:55.929 "psk": "key0" 00:22:55.929 } 00:22:55.929 }, 00:22:55.929 { 00:22:55.929 "method": "nvmf_subsystem_add_ns", 00:22:55.929 "params": { 00:22:55.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.929 "namespace": { 00:22:55.929 "nsid": 1, 00:22:55.929 "bdev_name": "malloc0", 00:22:55.929 "nguid": "66790D3899084BF18409D16FE81DBDB4", 00:22:55.929 "uuid": "66790d38-9908-4bf1-8409-d16fe81dbdb4", 00:22:55.929 "no_auto_visible": false 00:22:55.929 } 00:22:55.929 } 
00:22:55.929 }, 00:22:55.929 { 00:22:55.929 "method": "nvmf_subsystem_add_listener", 00:22:55.929 "params": { 00:22:55.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.929 "listen_address": { 00:22:55.929 "trtype": "TCP", 00:22:55.929 "adrfam": "IPv4", 00:22:55.929 "traddr": "10.0.0.2", 00:22:55.929 "trsvcid": "4420" 00:22:55.929 }, 00:22:55.929 "secure_channel": true 00:22:55.929 } 00:22:55.929 } 00:22:55.929 ] 00:22:55.929 } 00:22:55.929 ] 00:22:55.929 }' 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1558738 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1558738 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1558738 ']' 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.929 07:10:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.929 [2024-07-13 07:10:25.224721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:55.929 [2024-07-13 07:10:25.224813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.929 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.929 [2024-07-13 07:10:25.260680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:55.929 [2024-07-13 07:10:25.288146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.929 [2024-07-13 07:10:25.372687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.929 [2024-07-13 07:10:25.372746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.929 [2024-07-13 07:10:25.372768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.929 [2024-07-13 07:10:25.372779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.929 [2024-07-13 07:10:25.372788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
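[Editor's sketch] The restart traced above hands the entire saved target configuration to a fresh nvmf_tgt through /dev/fd/62, i.e. a file descriptor produced by shell process substitution rather than a config file on disk. A minimal sketch of that pattern, assuming a built nvmf_tgt and rpc.py from an SPDK checkout; the paths and the trimmed-down config below are illustrative, not the exact autotest ones:

#!/usr/bin/env bash
# Sketch: start an SPDK target from an inline JSON config delivered over a
# /dev/fd/N path, then block until its RPC socket answers.
NVMF_TGT=./build/bin/nvmf_tgt     # illustrative path
RPC=./scripts/rpc.py              # illustrative path

config='{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/psk.txt" } }
      ]
    }
  ]
}'

# <(...) expands to a /dev/fd/N path; -c makes the app replay the JSON-RPC
# calls in the config before it starts serving, just like the echoed config
# in the trace above.
"$NVMF_TGT" -i 0 -e 0xFFFF -c <(echo "$config") &
pid=$!

# waitforlisten equivalent: poll the default RPC socket until it responds.
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done
echo "nvmf_tgt (pid $pid) is up"

Delivering the config this way keeps secrets such as the PSK path out of on-disk files and lets the test compose the JSON inline, which is why the trace shows /dev/fd/62 instead of a filename.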
00:22:55.929 [2024-07-13 07:10:25.372900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.187 [2024-07-13 07:10:25.607224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.187 [2024-07-13 07:10:25.639247] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.445 [2024-07-13 07:10:25.653069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1558886 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1558886 /var/tmp/bdevperf.sock 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1558886 ']' 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.011 07:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:57.011 "subsystems": [ 00:22:57.011 { 00:22:57.011 "subsystem": "keyring", 00:22:57.011 "config": [ 00:22:57.011 { 00:22:57.011 "method": "keyring_file_add_key", 00:22:57.011 "params": { 00:22:57.011 "name": "key0", 00:22:57.011 "path": "/tmp/tmp.hW59mtHl8Q" 00:22:57.011 } 00:22:57.011 } 00:22:57.011 ] 00:22:57.011 }, 00:22:57.011 { 00:22:57.011 "subsystem": "iobuf", 00:22:57.011 "config": [ 00:22:57.011 { 00:22:57.011 "method": "iobuf_set_options", 00:22:57.011 "params": { 00:22:57.011 "small_pool_count": 8192, 00:22:57.011 "large_pool_count": 1024, 00:22:57.011 "small_bufsize": 8192, 00:22:57.011 "large_bufsize": 135168 00:22:57.011 } 00:22:57.011 } 00:22:57.011 ] 00:22:57.011 }, 00:22:57.011 { 00:22:57.011 "subsystem": "sock", 00:22:57.011 "config": [ 00:22:57.011 { 00:22:57.011 "method": "sock_set_default_impl", 00:22:57.011 "params": { 00:22:57.011 "impl_name": "posix" 00:22:57.011 } 00:22:57.011 }, 00:22:57.011 { 00:22:57.011 "method": "sock_impl_set_options", 00:22:57.011 "params": { 00:22:57.011 "impl_name": "ssl", 00:22:57.011 "recv_buf_size": 4096, 00:22:57.011 "send_buf_size": 4096, 00:22:57.011 "enable_recv_pipe": true, 00:22:57.011 "enable_quickack": false, 00:22:57.011 "enable_placement_id": 0, 00:22:57.011 "enable_zerocopy_send_server": true, 00:22:57.011 "enable_zerocopy_send_client": false, 00:22:57.011 "zerocopy_threshold": 0, 00:22:57.011 "tls_version": 0, 00:22:57.011 "enable_ktls": false 00:22:57.011 } 00:22:57.011 }, 00:22:57.011 { 00:22:57.011 "method": "sock_impl_set_options", 00:22:57.011 "params": { 00:22:57.011 "impl_name": "posix", 00:22:57.011 "recv_buf_size": 2097152, 00:22:57.011 "send_buf_size": 2097152, 00:22:57.011 
"enable_recv_pipe": true, 00:22:57.011 "enable_quickack": false, 00:22:57.011 "enable_placement_id": 0, 00:22:57.011 "enable_zerocopy_send_server": true, 00:22:57.011 "enable_zerocopy_send_client": false, 00:22:57.011 "zerocopy_threshold": 0, 00:22:57.011 "tls_version": 0, 00:22:57.011 "enable_ktls": false 00:22:57.011 } 00:22:57.011 } 00:22:57.011 ] 00:22:57.011 }, 00:22:57.011 { 00:22:57.011 "subsystem": "vmd", 00:22:57.012 "config": [] 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "subsystem": "accel", 00:22:57.012 "config": [ 00:22:57.012 { 00:22:57.012 "method": "accel_set_options", 00:22:57.012 "params": { 00:22:57.012 "small_cache_size": 128, 00:22:57.012 "large_cache_size": 16, 00:22:57.012 "task_count": 2048, 00:22:57.012 "sequence_count": 2048, 00:22:57.012 "buf_count": 2048 00:22:57.012 } 00:22:57.012 } 00:22:57.012 ] 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "subsystem": "bdev", 00:22:57.012 "config": [ 00:22:57.012 { 00:22:57.012 "method": "bdev_set_options", 00:22:57.012 "params": { 00:22:57.012 "bdev_io_pool_size": 65535, 00:22:57.012 "bdev_io_cache_size": 256, 00:22:57.012 "bdev_auto_examine": true, 00:22:57.012 "iobuf_small_cache_size": 128, 00:22:57.012 "iobuf_large_cache_size": 16 00:22:57.012 } 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "method": "bdev_raid_set_options", 00:22:57.012 "params": { 00:22:57.012 "process_window_size_kb": 1024 00:22:57.012 } 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "method": "bdev_iscsi_set_options", 00:22:57.012 "params": { 00:22:57.012 "timeout_sec": 30 00:22:57.012 } 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "method": "bdev_nvme_set_options", 00:22:57.012 "params": { 00:22:57.012 "action_on_timeout": "none", 00:22:57.012 "timeout_us": 0, 00:22:57.012 "timeout_admin_us": 0, 00:22:57.012 "keep_alive_timeout_ms": 10000, 00:22:57.012 "arbitration_burst": 0, 00:22:57.012 "low_priority_weight": 0, 00:22:57.012 "medium_priority_weight": 0, 00:22:57.012 "high_priority_weight": 0, 00:22:57.012 "nvme_adminq_poll_period_us": 10000, 00:22:57.012 "nvme_ioq_poll_period_us": 0, 00:22:57.012 "io_queue_requests": 512, 00:22:57.012 "delay_cmd_submit": true, 00:22:57.012 "transport_retry_count": 4, 00:22:57.012 "bdev_retry_count": 3, 00:22:57.012 "transport_ack_timeout": 0, 00:22:57.012 "ctrlr_loss_timeout_sec": 0, 00:22:57.012 "reconnect_delay_sec": 0, 00:22:57.012 "fast_io_fail_timeout_sec": 0, 00:22:57.012 "disable_auto_failback": false, 00:22:57.012 "generate_uuids": false, 00:22:57.012 "transport_tos": 0, 00:22:57.012 "nvme_error_stat": false, 00:22:57.012 "rdma_srq_size": 0, 00:22:57.012 "io_path_stat": false, 00:22:57.012 "allow_accel_sequence": false, 00:22:57.012 "rdma_max_cq_size": 0, 00:22:57.012 "rdma_cm_event_timeout_ms": 0, 00:22:57.012 "dhchap_digests": [ 00:22:57.012 "sha256", 00:22:57.012 "sha384", 00:22:57.012 "sha512" 00:22:57.012 ], 00:22:57.012 "dhchap_dhgroups": [ 00:22:57.012 "null", 00:22:57.012 "ffdhe2048", 00:22:57.012 "ffdhe3072", 00:22:57.012 "ffdhe4096", 00:22:57.012 "ffdhe6144", 00:22:57.012 "ffdhe8192" 00:22:57.012 ] 00:22:57.012 } 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "method": "bdev_nvme_attach_controller", 00:22:57.012 "params": { 00:22:57.012 "name": "nvme0", 00:22:57.012 "trtype": "TCP", 00:22:57.012 "adrfam": "IPv4", 00:22:57.012 "traddr": "10.0.0.2", 00:22:57.012 "trsvcid": "4420", 00:22:57.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.012 "prchk_reftag": false, 00:22:57.012 "prchk_guard": false, 00:22:57.012 "ctrlr_loss_timeout_sec": 0, 00:22:57.012 "reconnect_delay_sec": 0, 00:22:57.012 
"fast_io_fail_timeout_sec": 0, 00:22:57.012 "psk": "key0", 00:22:57.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.012 "hdgst": false, 00:22:57.012 "ddgst": false 00:22:57.012 } 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "method": "bdev_nvme_set_hotplug", 00:22:57.012 "params": { 00:22:57.012 "period_us": 100000, 00:22:57.012 "enable": false 00:22:57.012 } 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "method": "bdev_enable_histogram", 00:22:57.012 "params": { 00:22:57.012 "name": "nvme0n1", 00:22:57.012 "enable": true 00:22:57.012 } 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "method": "bdev_wait_for_examine" 00:22:57.012 } 00:22:57.012 ] 00:22:57.012 }, 00:22:57.012 { 00:22:57.012 "subsystem": "nbd", 00:22:57.012 "config": [] 00:22:57.012 } 00:22:57.012 ] 00:22:57.012 }' 00:22:57.012 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.012 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.012 07:10:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.012 [2024-07-13 07:10:26.240911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:57.012 [2024-07-13 07:10:26.240990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558886 ] 00:22:57.012 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.012 [2024-07-13 07:10:26.272919] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:57.012 [2024-07-13 07:10:26.304008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.012 [2024-07-13 07:10:26.394294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.269 [2024-07-13 07:10:26.572027] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.836 07:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.836 07:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:57.836 07:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.836 07:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:58.108 07:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.108 07:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.369 Running I/O for 1 seconds... 
00:22:59.305 00:22:59.305 Latency(us) 00:22:59.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.305 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:59.305 Verification LBA range: start 0x0 length 0x2000 00:22:59.305 nvme0n1 : 1.04 2905.63 11.35 0.00 0.00 43276.57 8107.05 68739.98 00:22:59.305 =================================================================================================================== 00:22:59.305 Total : 2905.63 11.35 0.00 0.00 43276.57 8107.05 68739.98 00:22:59.305 0 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:59.305 nvmf_trace.0 00:22:59.305 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1558886 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1558886 ']' 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1558886 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558886 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558886' 00:22:59.306 killing process with pid 1558886 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1558886 00:22:59.306 Received shutdown signal, test time was about 1.000000 seconds 00:22:59.306 00:22:59.306 Latency(us) 00:22:59.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.306 =================================================================================================================== 00:22:59.306 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.306 07:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1558886 00:22:59.563 07:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:59.563 07:10:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.563 07:10:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
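[Editor's sketch] Every killprocess call in this run (pids 1558328, 1558303, 1558886) traces the same steps: a non-empty pid check, kill -0 to probe the process, a ps name lookup, kill, then wait to reap the shutdown output. A rough reconstruction of that helper inferred from the trace alone; the real autotest_common.sh version carries extra branches (sudo handling, the uname check visible above) that are elided here:

#!/usr/bin/env bash
# Sketch of the killprocess shape seen in the trace: probe, name, kill, reap.
killprocess() {
  local pid=$1
  [[ -n "$pid" ]] || return 1

  # kill -0 sends no signal; it only tests that the pid exists and is
  # signalable by this user.
  kill -0 "$pid" 2>/dev/null || return 1

  local name
  name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($name)"

  kill "$pid"
  # wait reaps the child and lets its shutdown output (the all-zero Latency
  # table printed above on SIGTERM) flush before the script moves on.
  wait "$pid" 2>/dev/null || true
}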
00:22:59.563 07:10:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.563 07:10:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:59.563 07:10:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.563 07:10:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.563 rmmod nvme_tcp 00:22:59.563 rmmod nvme_fabrics 00:22:59.563 rmmod nvme_keyring 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1558738 ']' 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1558738 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1558738 ']' 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1558738 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.563 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558738 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558738' 00:22:59.822 killing process with pid 1558738 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1558738 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1558738 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.822 07:10:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.353 07:10:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.353 07:10:31 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.wqJMhAQy9J /tmp/tmp.Eabve2e4HJ /tmp/tmp.hW59mtHl8Q 00:23:02.353 00:23:02.353 real 1m19.321s 00:23:02.353 user 2m8.214s 00:23:02.353 sys 0m26.822s 00:23:02.353 07:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:02.353 07:10:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.353 ************************************ 00:23:02.353 END TEST nvmf_tls 00:23:02.353 ************************************ 00:23:02.353 07:10:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:02.353 07:10:31 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:02.353 07:10:31 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:02.353 07:10:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.353 07:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.353 ************************************ 00:23:02.353 START TEST nvmf_fips 00:23:02.353 ************************************ 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:02.353 * Looking for test storage... 00:23:02.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:02.353 
07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:02.353 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:02.354 Error setting digest 00:23:02.354 0032438B697F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:02.354 0032438B697F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.354 07:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:04.253 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.253 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.253 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.253 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.253 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.254 
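[Editor's sketch] The enumeration that follows this count check walks each supported PCI function and resolves it to a kernel net device by globbing sysfs, which is where the "Found net devices under 0000:0a:00.0: cvl_0_0" lines come from. A condensed sketch of that mapping, assuming the same sysfs layout; the 8086:159b vendor:device ID is the Intel E810 ID taken from the trace:

#!/usr/bin/env bash
# Sketch: map Intel E810 (8086:159b) PCI functions to their net interfaces
# via the same /sys/bus/pci/devices/$pci/net/* glob the trace uses.
declare -a net_devs=()

# lspci: -D prints the full domain:bus:dev.func address, -n keeps IDs
# numeric, -d filters on vendor:device.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  echo "Found $pci (0x8086 - 0x159b)"
  # Each entry under .../net is an interface bound to that PCI function.
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e "$dev" ]] || continue
    echo "Found net devices under $pci: ${dev##*/}"
    net_devs+=("${dev##*/}")
  done
done

(( ${#net_devs[@]} > 0 )) || { echo "no supported NICs found" >&2; exit 1; }

The test then splits the discovered interfaces into a target side and an initiator side, moving one into a network namespace so a single host can exercise a real NVMe/TCP link, as the ip netns / ping steps further down show.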
07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:04.254 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:04.254 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:04.254 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:04.254 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:23:04.254 00:23:04.254 --- 10.0.0.2 ping statistics --- 00:23:04.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.254 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:23:04.254 00:23:04.254 --- 10.0.0.1 ping statistics --- 00:23:04.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.254 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.254 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1561128 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1561128 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1561128 ']' 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.255 07:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:04.512 [2024-07-13 07:10:33.755276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:04.512 [2024-07-13 07:10:33.755377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.512 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.512 [2024-07-13 07:10:33.795498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:04.512 [2024-07-13 07:10:33.821660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.512 [2024-07-13 07:10:33.907962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:04.512 [2024-07-13 07:10:33.908019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.512 [2024-07-13 07:10:33.908048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.512 [2024-07-13 07:10:33.908060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.512 [2024-07-13 07:10:33.908070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.512 [2024-07-13 07:10:33.908097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:04.770 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:05.028 [2024-07-13 07:10:34.278665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.028 [2024-07-13 07:10:34.294650] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.028 [2024-07-13 07:10:34.294885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.028 [2024-07-13 07:10:34.326471] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:05.028 malloc0 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1561276 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1561276 /var/tmp/bdevperf.sock 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1561276 ']' 00:23:05.028 07:10:34 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.028 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:05.028 [2024-07-13 07:10:34.420048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:05.028 [2024-07-13 07:10:34.420127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1561276 ] 00:23:05.028 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.028 [2024-07-13 07:10:34.452787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:05.028 [2024-07-13 07:10:34.481877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.286 [2024-07-13 07:10:34.570873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.286 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.286 07:10:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:05.286 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:05.545 [2024-07-13 07:10:34.906811] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.545 [2024-07-13 07:10:34.906975] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:05.545 TLSTESTn1 00:23:05.803 07:10:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:05.803 Running I/O for 10 seconds... 
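For reference, the TLS wiring traced just above touches both ends of the connection: setup_nvmf_tgt_conf registers the key file with the target (the nvmf_tcp_psk_path deprecation warning in the trace comes from that path-based PSK registration), and the initiator hands the same file to bdev_nvme_attach_controller over bdevperf's RPC socket. A condensed sketch using only the key, paths, and RPC arguments visible in the trace; this is the test's published sample interchange key, not a real secret:

  # Write the PSK interchange key and lock down its permissions, as fips.sh does.
  key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
  chmod 0600 "$key"

  # Initiator side: attach a TLS-protected controller through bdevperf's RPC socket.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$key"

The attach creates the TLSTESTn1 namespace that bdevperf then exercises for 10 seconds, as the results below show.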
00:23:15.770 00:23:15.770 Latency(us) 00:23:15.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.770 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:15.770 Verification LBA range: start 0x0 length 0x2000 00:23:15.770 TLSTESTn1 : 10.04 3293.27 12.86 0.00 0.00 38773.80 6262.33 73011.96 00:23:15.770 =================================================================================================================== 00:23:15.770 Total : 3293.27 12.86 0.00 0.00 38773.80 6262.33 73011.96 00:23:15.770 0 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:15.770 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:15.770 nvmf_trace.0 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1561276 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1561276 ']' 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1561276 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1561276 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1561276' 00:23:16.028 killing process with pid 1561276 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1561276 00:23:16.028 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.028 00:23:16.028 Latency(us) 00:23:16.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.028 =================================================================================================================== 00:23:16.028 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.028 [2024-07-13 07:10:45.300994] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:16.028 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1561276 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:16.284 rmmod nvme_tcp 00:23:16.284 rmmod nvme_fabrics 00:23:16.284 rmmod nvme_keyring 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1561128 ']' 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1561128 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1561128 ']' 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1561128 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1561128 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1561128' 00:23:16.284 killing process with pid 1561128 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1561128 00:23:16.284 [2024-07-13 07:10:45.614454] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:16.284 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1561128 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.541 07:10:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:19.070 00:23:19.070 real 0m16.543s 00:23:19.070 user 0m20.928s 00:23:19.070 sys 0m5.899s 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:19.070 ************************************ 00:23:19.070 END TEST nvmf_fips 
00:23:19.070 ************************************ 00:23:19.070 07:10:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:19.070 07:10:47 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:19.070 07:10:47 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:19.070 07:10:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:19.070 07:10:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.070 07:10:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.070 ************************************ 00:23:19.070 START TEST nvmf_fuzz 00:23:19.070 ************************************ 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:19.070 * Looking for test storage... 00:23:19.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.070 07:10:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.070 07:10:48 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.071 07:10:48 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.071 07:10:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:20.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.470 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:20.729 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:20.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:20.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.729 07:10:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:23:20.729 00:23:20.729 --- 10.0.0.2 ping statistics --- 00:23:20.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.729 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:23:20.729 00:23:20.729 --- 10.0.0.1 ping statistics --- 00:23:20.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.729 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1564517 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1564517 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1564517 ']' 00:23:20.729 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.730 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.730 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
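The nvmf_tcp_init sequence traced for both the fips and the fuzz run builds the same loopback topology every time: one E810 port is moved into the cvl_0_0_ns_spdk namespace as the target side, while the other stays in the default namespace as the initiator. Condensed into one place, with device names, addresses, and the port taken directly from the trace, the steps are:

  # Start from clean interfaces; target port goes into its own namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator gets 10.0.0.1/24 in the root namespace, target gets 10.0.0.2/24 inside it.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port on the initiator-facing interface, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings are what the helper checks before returning 0; nvmf_tgt is then launched with an "ip netns exec cvl_0_0_ns_spdk" prefix so it listens on the target side of this link.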
00:23:20.730 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.730 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.987 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:21.245 Malloc0 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:21.245 07:10:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:53.308 Fuzzing completed. 
Shutting down the fuzz application 00:23:53.308 00:23:53.308 Dumping successful admin opcodes: 00:23:53.308 8, 9, 10, 24, 00:23:53.308 Dumping successful io opcodes: 00:23:53.308 0, 9, 00:23:53.308 NS: 0x200003aeff00 I/O qp, Total commands completed: 463308, total successful commands: 2680, random_seed: 4108492096 00:23:53.308 NS: 0x200003aeff00 admin qp, Total commands completed: 57728, total successful commands: 462, random_seed: 627358272 00:23:53.308 07:11:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:53.308 Fuzzing completed. Shutting down the fuzz application 00:23:53.308 00:23:53.308 Dumping successful admin opcodes: 00:23:53.308 24, 00:23:53.308 Dumping successful io opcodes: 00:23:53.308 00:23:53.308 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1877384594 00:23:53.308 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1877521430 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.308 rmmod nvme_tcp 00:23:53.308 rmmod nvme_fabrics 00:23:53.308 rmmod nvme_keyring 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1564517 ']' 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1564517 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1564517 ']' 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1564517 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1564517 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
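Both fuzz passes above drive the same nvme_fuzz binary at the listener created earlier: the first generates random commands for a fixed time with a fixed seed, the second replays the canned command set in example.json. As traced, with paths shortened to be relative to the spdk checkout and flags beyond -m/-t/-S/-F/-j left exactly as the harness passed them:

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

  # Pass 1: 30 s of seeded random commands (-t 30 -S 123456), flags as traced.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

  # Pass 2: replay the canned opcodes from example.json against the same TRID.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" \
    -j test/app/fuzz/nvme_fuzz/example.json -a

The opcode dumps printed at shutdown summarize which admin and I/O opcodes completed successfully in each pass, along with the per-queue command totals and the seed, which makes a failing run reproducible.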
00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1564517' 00:23:53.308 killing process with pid 1564517 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1564517 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1564517 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.308 07:11:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.835 07:11:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.835 07:11:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:55.835 00:23:55.835 real 0m36.783s 00:23:55.835 user 0m50.812s 00:23:55.835 sys 0m15.394s 00:23:55.835 07:11:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.835 07:11:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.835 ************************************ 00:23:55.835 END TEST nvmf_fuzz 00:23:55.835 ************************************ 00:23:55.835 07:11:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:55.835 07:11:24 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:55.835 07:11:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:55.835 07:11:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.835 07:11:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.835 ************************************ 00:23:55.835 START TEST nvmf_multiconnection 00:23:55.835 ************************************ 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:55.835 * Looking for test storage... 
00:23:55.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.835 07:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.836 07:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.208 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.208 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.208 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.209 07:11:26 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:57.209 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:57.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:57.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:57.209 07:11:26 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:57.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.209 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:23:57.467 00:23:57.467 --- 10.0.0.2 ping statistics --- 00:23:57.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.467 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:23:57.467 00:23:57.467 --- 10.0.0.1 ping statistics --- 00:23:57.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.467 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1570112 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1570112 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1570112 ']' 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
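At this point the harness has split the two ports of one physical NIC between network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1, so the NVMe/TCP traffic crosses real hardware instead of loopback. A minimal sketch of the same topology, with the interface and namespace names taken from the trace above (adjust them for other NICs):

    NS=cvl_0_0_ns_spdk; IF_TGT=cvl_0_0; IF_INI=cvl_0_1
    ip netns add "$NS"                          # private namespace for the target side
    ip link set "$IF_TGT" netns "$NS"           # move one port into it
    ip addr add 10.0.0.1/24 dev "$IF_INI"       # initiator address stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
    ip link set "$IF_INI" up
    ip netns exec "$NS" ip link set "$IF_TGT" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                          # reachability check, as in the trace

Running the target under ip netns exec (the NVMF_TARGET_NS_CMD prefix built above) is what lets both ends share one host without the kernel short-circuiting the 10.0.0.1 to 10.0.0.2 path over loopback.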
00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.467 07:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.467 [2024-07-13 07:11:26.867702] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:57.468 [2024-07-13 07:11:26.867776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.468 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.468 [2024-07-13 07:11:26.906513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:57.726 [2024-07-13 07:11:26.934462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.726 [2024-07-13 07:11:27.020066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.726 [2024-07-13 07:11:27.020120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.726 [2024-07-13 07:11:27.020144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.726 [2024-07-13 07:11:27.020168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.726 [2024-07-13 07:11:27.020178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.726 [2024-07-13 07:11:27.020271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.726 [2024-07-13 07:11:27.020335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.726 [2024-07-13 07:11:27.020403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.726 [2024-07-13 07:11:27.020405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.726 [2024-07-13 07:11:27.156516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.726 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 Malloc1 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 [2024-07-13 07:11:27.212015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 Malloc2 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 Malloc3 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.984 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 Malloc4 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 Malloc5 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.985 Malloc6 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:57.985 07:11:27 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.985 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 Malloc7 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 Malloc8 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 Malloc9 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 Malloc10 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 Malloc11 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.243 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:58.499 07:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.499 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:58.499 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.499 07:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:59.062 07:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:59.062 07:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.062 07:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.062 07:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:59.062 07:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.963 07:11:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:01.943 07:11:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:01.943 07:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:01.943 07:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:01.943 07:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:01.943 07:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:03.836 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:03.836 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:03.836 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:03.836 07:11:33 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:03.836 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:03.836 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:03.836 07:11:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:03.836 07:11:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:04.400 07:11:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:04.401 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:04.401 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:04.401 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:04.401 07:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.295 07:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:07.227 07:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:07.227 07:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.227 07:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.227 07:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:07.227 07:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
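Before these connects, the seq 1 11 loop traced earlier provisioned the target over its RPC socket: each iteration creates a 64 MiB malloc bdev with 512 B blocks, a subsystem cnode$i with serial SPDK$i, a namespace backed by the bdev, and a TCP listener on 10.0.0.2:4420. rpc_cmd is the harness's wrapper around scripts/rpc.py; a standalone sketch of the same sequence, assuming the SPDK source tree and the default RPC socket /var/tmp/spdk.sock:

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB in-capsule data
    for i in $(seq 1 11); do
        $RPC bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB bdev, 512 B block size
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

Since the RPC endpoint is a UNIX socket, these calls work from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk; waitforlisten above blocks until that socket answers, so the loop can start immediately after nvmfappstart.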
00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.124 07:11:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:10.056 07:11:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:10.056 07:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:10.056 07:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.056 07:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:10.056 07:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.952 07:11:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:12.517 07:11:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:12.517 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:12.517 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.517 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:12.517 07:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.039 07:11:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:15.296 07:11:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:15.296 07:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:15.296 07:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:15.296 07:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:15.296 07:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:17.857 07:11:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:17.857 07:11:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:17.857 07:11:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:17.857 07:11:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:17.857 07:11:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:17.857 07:11:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:17.857 07:11:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.858 07:11:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:18.421 07:11:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:18.421 07:11:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:18.421 07:11:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.422 07:11:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:18.422 07:11:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.318 07:11:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:21.249 07:11:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:21.249 07:11:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
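Each nvme connect above is gated by waitforserial, which polls lsblk until a block device whose SERIAL column matches the subsystem's serial string appears, retrying up to 16 times at 2 s intervals. A condensed equivalent of that pattern for a single subsystem, using the hostnqn/hostid from the trace and cnode9/SPDK9 as the example:

    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK9) >= 1 )) && exit 0   # device is up
    done
    exit 1   # serial never appeared within ~30 s

The same loop, with only the serial swapped, gates all 11 connects so that every /dev/nvme*n1 device exists before the fio workload below is launched.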
00:24:21.249 07:11:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.249 07:11:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:21.249 07:11:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.144 07:11:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:24.074 07:11:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:24.074 07:11:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:24.074 07:11:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.074 07:11:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:24.074 07:11:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.966 07:11:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:27.337 07:11:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:27.337 07:11:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:27.337 07:11:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.337 07:11:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:27.337 07:11:56 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # sleep 2 00:24:29.234 07:11:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:29.234 07:11:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:29.234 07:11:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:29.234 07:11:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:29.234 07:11:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.234 07:11:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:29.234 07:11:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:29.234 [global] 00:24:29.234 thread=1 00:24:29.234 invalidate=1 00:24:29.234 rw=read 00:24:29.234 time_based=1 00:24:29.234 runtime=10 00:24:29.234 ioengine=libaio 00:24:29.234 direct=1 00:24:29.234 bs=262144 00:24:29.234 iodepth=64 00:24:29.234 norandommap=1 00:24:29.234 numjobs=1 00:24:29.234 00:24:29.234 [job0] 00:24:29.234 filename=/dev/nvme0n1 00:24:29.234 [job1] 00:24:29.234 filename=/dev/nvme10n1 00:24:29.234 [job2] 00:24:29.234 filename=/dev/nvme1n1 00:24:29.234 [job3] 00:24:29.234 filename=/dev/nvme2n1 00:24:29.234 [job4] 00:24:29.234 filename=/dev/nvme3n1 00:24:29.234 [job5] 00:24:29.234 filename=/dev/nvme4n1 00:24:29.234 [job6] 00:24:29.234 filename=/dev/nvme5n1 00:24:29.234 [job7] 00:24:29.234 filename=/dev/nvme6n1 00:24:29.234 [job8] 00:24:29.234 filename=/dev/nvme7n1 00:24:29.234 [job9] 00:24:29.234 filename=/dev/nvme8n1 00:24:29.234 [job10] 00:24:29.234 filename=/dev/nvme9n1 00:24:29.234 Could not set queue depth (nvme0n1) 00:24:29.234 Could not set queue depth (nvme10n1) 00:24:29.234 Could not set queue depth (nvme1n1) 00:24:29.234 Could not set queue depth (nvme2n1) 00:24:29.234 Could not set queue depth (nvme3n1) 00:24:29.234 Could not set queue depth (nvme4n1) 00:24:29.234 Could not set queue depth (nvme5n1) 00:24:29.234 Could not set queue depth (nvme6n1) 00:24:29.234 Could not set queue depth (nvme7n1) 00:24:29.234 Could not set queue depth (nvme8n1) 00:24:29.234 Could not set queue depth (nvme9n1) 00:24:29.492 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:29.492 fio-3.35 00:24:29.492 Starting 11 threads 00:24:41.707 00:24:41.707 job0: (groupid=0, jobs=1): err= 0: pid=1574377: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=825, BW=206MiB/s (216MB/s)(2079MiB/10078msec) 00:24:41.707 slat (usec): min=9, max=125215, avg=948.34, stdev=4314.31 00:24:41.707 clat (usec): min=1038, max=311866, avg=76556.69, stdev=47313.69 00:24:41.707 lat (usec): min=1067, max=311885, avg=77505.02, stdev=48006.23 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 31], 00:24:41.707 | 30.00th=[ 43], 40.00th=[ 59], 50.00th=[ 74], 60.00th=[ 89], 00:24:41.707 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 142], 95.00th=[ 159], 00:24:41.707 | 99.00th=[ 207], 99.50th=[ 226], 99.90th=[ 259], 99.95th=[ 264], 00:24:41.707 | 99.99th=[ 313] 00:24:41.707 bw ( KiB/s): min=89088, max=466432, per=10.79%, avg=211287.55, stdev=92209.73, samples=20 00:24:41.707 iops : min= 348, max= 1822, avg=825.30, stdev=360.24, samples=20 00:24:41.707 lat (msec) : 2=0.05%, 4=1.03%, 10=2.92%, 20=4.98%, 50=26.48% 00:24:41.707 lat (msec) : 100=35.33%, 250=29.05%, 500=0.16% 00:24:41.707 cpu : usr=0.42%, sys=2.50%, ctx=1892, majf=0, minf=4097 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=8316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.707 job1: (groupid=0, jobs=1): err= 0: pid=1574378: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=683, BW=171MiB/s (179MB/s)(1722MiB/10076msec) 00:24:41.707 slat (usec): min=9, max=79607, avg=1102.81, stdev=3812.91 00:24:41.707 clat (usec): min=1013, max=216943, avg=92455.19, stdev=36817.80 00:24:41.707 lat (usec): min=1036, max=231373, avg=93558.00, stdev=37349.86 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 43], 20.00th=[ 65], 00:24:41.707 | 30.00th=[ 77], 40.00th=[ 87], 50.00th=[ 93], 60.00th=[ 102], 00:24:41.707 | 70.00th=[ 110], 80.00th=[ 123], 90.00th=[ 140], 95.00th=[ 153], 00:24:41.707 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 197], 99.95th=[ 197], 00:24:41.707 | 99.99th=[ 218] 00:24:41.707 bw ( KiB/s): min=118272, max=275456, per=8.92%, avg=174679.65, stdev=43882.10, samples=20 00:24:41.707 iops : min= 462, max= 1076, avg=682.30, stdev=171.44, samples=20 00:24:41.707 lat (msec) : 2=0.06%, 4=0.10%, 10=1.48%, 20=3.43%, 50=6.90% 00:24:41.707 lat (msec) : 100=46.73%, 250=41.31% 00:24:41.707 cpu : usr=0.40%, sys=1.99%, ctx=1676, majf=0, minf=4097 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=6887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.707 job2: (groupid=0, jobs=1): err= 0: pid=1574379: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=787, BW=197MiB/s (207MB/s)(1987MiB/10089msec) 00:24:41.707 slat (usec): min=14, max=117388, avg=1145.84, stdev=4344.65 00:24:41.707 clat 
(msec): min=3, max=277, avg=80.02, stdev=48.47 00:24:41.707 lat (msec): min=3, max=286, avg=81.17, stdev=49.18 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 42], 00:24:41.707 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 79], 00:24:41.707 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 146], 95.00th=[ 184], 00:24:41.707 | 99.00th=[ 230], 99.50th=[ 245], 99.90th=[ 264], 99.95th=[ 275], 00:24:41.707 | 99.99th=[ 279] 00:24:41.707 bw ( KiB/s): min=77312, max=488495, per=10.31%, avg=201850.75, stdev=97849.03, samples=20 00:24:41.707 iops : min= 302, max= 1908, avg=788.45, stdev=382.20, samples=20 00:24:41.707 lat (msec) : 4=0.06%, 10=0.43%, 20=1.74%, 50=28.73%, 100=39.94% 00:24:41.707 lat (msec) : 250=28.71%, 500=0.39% 00:24:41.707 cpu : usr=0.50%, sys=2.30%, ctx=1623, majf=0, minf=4097 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=7949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.707 job3: (groupid=0, jobs=1): err= 0: pid=1574380: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=596, BW=149MiB/s (156MB/s)(1502MiB/10078msec) 00:24:41.707 slat (usec): min=10, max=69468, avg=1352.38, stdev=4330.26 00:24:41.707 clat (msec): min=5, max=283, avg=105.93, stdev=46.68 00:24:41.707 lat (msec): min=5, max=283, avg=107.28, stdev=47.50 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 17], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 70], 00:24:41.707 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 97], 60.00th=[ 109], 00:24:41.707 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 180], 95.00th=[ 199], 00:24:41.707 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 275], 00:24:41.707 | 99.99th=[ 284] 00:24:41.707 bw ( KiB/s): min=77824, max=253440, per=7.77%, avg=152147.00, stdev=51700.09, samples=20 00:24:41.707 iops : min= 304, max= 990, avg=594.30, stdev=201.93, samples=20 00:24:41.707 lat (msec) : 10=0.22%, 20=1.37%, 50=4.61%, 100=46.88%, 250=46.20% 00:24:41.707 lat (msec) : 500=0.73% 00:24:41.707 cpu : usr=0.36%, sys=1.95%, ctx=1437, majf=0, minf=3721 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=6007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.707 job4: (groupid=0, jobs=1): err= 0: pid=1574381: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=564, BW=141MiB/s (148MB/s)(1425MiB/10089msec) 00:24:41.707 slat (usec): min=12, max=101486, avg=1457.70, stdev=5148.05 00:24:41.707 clat (usec): min=879, max=303471, avg=111762.37, stdev=53146.12 00:24:41.707 lat (usec): min=907, max=315163, avg=113220.08, stdev=54094.30 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 62], 00:24:41.707 | 30.00th=[ 81], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 116], 00:24:41.707 | 70.00th=[ 134], 80.00th=[ 157], 90.00th=[ 190], 95.00th=[ 211], 00:24:41.707 | 99.00th=[ 249], 99.50th=[ 259], 99.90th=[ 279], 99.95th=[ 292], 00:24:41.707 | 99.99th=[ 305] 00:24:41.707 bw ( KiB/s): 
min=79360, max=276992, per=7.37%, avg=144245.75, stdev=57157.64, samples=20 00:24:41.707 iops : min= 310, max= 1082, avg=563.45, stdev=223.28, samples=20 00:24:41.707 lat (usec) : 1000=0.04% 00:24:41.707 lat (msec) : 10=0.37%, 20=1.12%, 50=8.88%, 100=33.29%, 250=55.37% 00:24:41.707 lat (msec) : 500=0.93% 00:24:41.707 cpu : usr=0.30%, sys=1.82%, ctx=1370, majf=0, minf=4097 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=5698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.707 job5: (groupid=0, jobs=1): err= 0: pid=1574382: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=699, BW=175MiB/s (183MB/s)(1762MiB/10078msec) 00:24:41.707 slat (usec): min=11, max=202003, avg=1147.36, stdev=4169.32 00:24:41.707 clat (msec): min=3, max=284, avg=90.29, stdev=40.03 00:24:41.707 lat (msec): min=3, max=284, avg=91.44, stdev=40.36 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 61], 00:24:41.707 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 89], 60.00th=[ 97], 00:24:41.707 | 70.00th=[ 105], 80.00th=[ 116], 90.00th=[ 132], 95.00th=[ 144], 00:24:41.707 | 99.00th=[ 257], 99.50th=[ 268], 99.90th=[ 275], 99.95th=[ 279], 00:24:41.707 | 99.99th=[ 284] 00:24:41.707 bw ( KiB/s): min=113152, max=378368, per=9.13%, avg=178797.80, stdev=56780.95, samples=20 00:24:41.707 iops : min= 442, max= 1478, avg=698.40, stdev=221.80, samples=20 00:24:41.707 lat (msec) : 4=0.03%, 10=0.72%, 20=1.45%, 50=11.71%, 100=50.48% 00:24:41.707 lat (msec) : 250=34.44%, 500=1.18% 00:24:41.707 cpu : usr=0.31%, sys=2.22%, ctx=1589, majf=0, minf=4097 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=7048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.707 job6: (groupid=0, jobs=1): err= 0: pid=1574383: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=665, BW=166MiB/s (175MB/s)(1680MiB/10088msec) 00:24:41.707 slat (usec): min=10, max=130105, avg=1343.76, stdev=5473.19 00:24:41.707 clat (msec): min=12, max=319, avg=94.69, stdev=52.33 00:24:41.707 lat (msec): min=13, max=381, avg=96.04, stdev=53.18 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 52], 00:24:41.707 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 80], 60.00th=[ 97], 00:24:41.707 | 70.00th=[ 113], 80.00th=[ 134], 90.00th=[ 180], 95.00th=[ 197], 00:24:41.707 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 266], 99.95th=[ 268], 00:24:41.707 | 99.99th=[ 321] 00:24:41.707 bw ( KiB/s): min=84480, max=321024, per=8.70%, avg=170350.55, stdev=73078.28, samples=20 00:24:41.707 iops : min= 330, max= 1254, avg=665.40, stdev=285.40, samples=20 00:24:41.707 lat (msec) : 20=0.48%, 50=18.03%, 100=43.12%, 250=37.73%, 500=0.64% 00:24:41.707 cpu : usr=0.40%, sys=2.19%, ctx=1439, majf=0, minf=4097 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.707 job7: (groupid=0, jobs=1): err= 0: pid=1574384: Sat Jul 13 07:12:09 2024 00:24:41.707 read: IOPS=657, BW=164MiB/s (172MB/s)(1656MiB/10078msec) 00:24:41.707 slat (usec): min=13, max=103977, avg=1308.42, stdev=4280.06 00:24:41.707 clat (msec): min=10, max=252, avg=95.97, stdev=37.94 00:24:41.707 lat (msec): min=11, max=297, avg=97.27, stdev=38.49 00:24:41.707 clat percentiles (msec): 00:24:41.707 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 59], 20.00th=[ 69], 00:24:41.707 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 95], 00:24:41.707 | 70.00th=[ 105], 80.00th=[ 122], 90.00th=[ 150], 95.00th=[ 174], 00:24:41.707 | 99.00th=[ 209], 99.50th=[ 232], 99.90th=[ 253], 99.95th=[ 253], 00:24:41.707 | 99.99th=[ 253] 00:24:41.707 bw ( KiB/s): min=89600, max=253440, per=8.58%, avg=167976.40, stdev=43542.78, samples=20 00:24:41.707 iops : min= 350, max= 990, avg=656.15, stdev=170.10, samples=20 00:24:41.707 lat (msec) : 20=0.65%, 50=5.86%, 100=59.06%, 250=34.26%, 500=0.17% 00:24:41.707 cpu : usr=0.41%, sys=2.22%, ctx=1501, majf=0, minf=4097 00:24:41.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:41.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.707 issued rwts: total=6625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.708 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.708 job8: (groupid=0, jobs=1): err= 0: pid=1574385: Sat Jul 13 07:12:09 2024 00:24:41.708 read: IOPS=669, BW=167MiB/s (176MB/s)(1689MiB/10091msec) 00:24:41.708 slat (usec): min=10, max=141834, avg=1225.82, stdev=4718.94 00:24:41.708 clat (msec): min=2, max=295, avg=94.28, stdev=48.58 00:24:41.708 lat (msec): min=2, max=305, avg=95.50, stdev=49.33 00:24:41.708 clat percentiles (msec): 00:24:41.708 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 54], 00:24:41.708 | 30.00th=[ 67], 40.00th=[ 79], 50.00th=[ 90], 60.00th=[ 104], 00:24:41.708 | 70.00th=[ 115], 80.00th=[ 130], 90.00th=[ 159], 95.00th=[ 186], 00:24:41.708 | 99.00th=[ 234], 99.50th=[ 257], 99.90th=[ 288], 99.95th=[ 288], 00:24:41.708 | 99.99th=[ 296] 00:24:41.708 bw ( KiB/s): min=88064, max=364032, per=8.75%, avg=171340.30, stdev=64338.93, samples=20 00:24:41.708 iops : min= 344, max= 1422, avg=669.25, stdev=251.25, samples=20 00:24:41.708 lat (msec) : 4=0.50%, 10=0.41%, 20=1.84%, 50=15.17%, 100=39.84% 00:24:41.708 lat (msec) : 250=41.57%, 500=0.67% 00:24:41.708 cpu : usr=0.48%, sys=2.05%, ctx=1554, majf=0, minf=4097 00:24:41.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:41.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.708 issued rwts: total=6757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.708 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.708 job9: (groupid=0, jobs=1): err= 0: pid=1574386: Sat Jul 13 07:12:09 2024 00:24:41.708 read: IOPS=795, BW=199MiB/s (208MB/s)(2004MiB/10079msec) 00:24:41.708 slat (usec): min=12, max=35225, avg=1131.46, stdev=3086.95 00:24:41.708 clat (msec): min=12, max=191, avg=79.29, stdev=32.09 00:24:41.708 lat (msec): min=12, max=191, avg=80.42, stdev=32.44 00:24:41.708 clat 
percentiles (msec): 00:24:41.708 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 52], 00:24:41.708 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 75], 60.00th=[ 84], 00:24:41.708 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 126], 95.00th=[ 142], 00:24:41.708 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 178], 00:24:41.708 | 99.99th=[ 192] 00:24:41.708 bw ( KiB/s): min=103728, max=376320, per=10.39%, avg=203535.20, stdev=73973.41, samples=20 00:24:41.708 iops : min= 405, max= 1470, avg=795.05, stdev=288.97, samples=20 00:24:41.708 lat (msec) : 20=0.07%, 50=18.92%, 100=55.83%, 250=25.18% 00:24:41.708 cpu : usr=0.53%, sys=2.54%, ctx=1652, majf=0, minf=4097 00:24:41.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:41.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.708 issued rwts: total=8014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.708 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.708 job10: (groupid=0, jobs=1): err= 0: pid=1574387: Sat Jul 13 07:12:09 2024 00:24:41.708 read: IOPS=710, BW=178MiB/s (186MB/s)(1791MiB/10084msec) 00:24:41.708 slat (usec): min=9, max=126919, avg=938.09, stdev=4063.62 00:24:41.708 clat (msec): min=2, max=270, avg=89.06, stdev=49.46 00:24:41.708 lat (msec): min=2, max=292, avg=89.99, stdev=49.76 00:24:41.708 clat percentiles (msec): 00:24:41.708 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 47], 00:24:41.708 | 30.00th=[ 65], 40.00th=[ 77], 50.00th=[ 86], 60.00th=[ 95], 00:24:41.708 | 70.00th=[ 106], 80.00th=[ 120], 90.00th=[ 146], 95.00th=[ 197], 00:24:41.708 | 99.00th=[ 236], 99.50th=[ 245], 99.90th=[ 262], 99.95th=[ 264], 00:24:41.708 | 99.99th=[ 271] 00:24:41.708 bw ( KiB/s): min=118272, max=335872, per=9.28%, avg=181793.70, stdev=55283.79, samples=20 00:24:41.708 iops : min= 462, max= 1312, avg=710.10, stdev=215.96, samples=20 00:24:41.708 lat (msec) : 4=0.27%, 10=1.86%, 20=4.35%, 50=14.49%, 100=43.53% 00:24:41.708 lat (msec) : 250=35.13%, 500=0.38% 00:24:41.708 cpu : usr=0.38%, sys=2.24%, ctx=1697, majf=0, minf=4097 00:24:41.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:41.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:41.708 issued rwts: total=7165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.708 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:41.708 00:24:41.708 Run status group 0 (all jobs): 00:24:41.708 READ: bw=1912MiB/s (2005MB/s), 141MiB/s-206MiB/s (148MB/s-216MB/s), io=18.8GiB (20.2GB), run=10076-10091msec 00:24:41.708 00:24:41.708 Disk stats (read/write): 00:24:41.708 nvme0n1: ios=16428/0, merge=0/0, ticks=1239360/0, in_queue=1239360, util=97.22% 00:24:41.708 nvme10n1: ios=13524/0, merge=0/0, ticks=1239148/0, in_queue=1239148, util=97.45% 00:24:41.708 nvme1n1: ios=15705/0, merge=0/0, ticks=1233996/0, in_queue=1233996, util=97.70% 00:24:41.708 nvme2n1: ios=11791/0, merge=0/0, ticks=1238559/0, in_queue=1238559, util=97.86% 00:24:41.708 nvme3n1: ios=11227/0, merge=0/0, ticks=1234930/0, in_queue=1234930, util=97.92% 00:24:41.708 nvme4n1: ios=13897/0, merge=0/0, ticks=1239540/0, in_queue=1239540, util=98.28% 00:24:41.708 nvme5n1: ios=13256/0, merge=0/0, ticks=1234541/0, in_queue=1234541, util=98.40% 00:24:41.708 nvme6n1: ios=13039/0, merge=0/0, ticks=1234509/0, in_queue=1234509, 
util=98.53% 00:24:41.708 nvme7n1: ios=13326/0, merge=0/0, ticks=1235351/0, in_queue=1235351, util=98.92% 00:24:41.708 nvme8n1: ios=15806/0, merge=0/0, ticks=1235512/0, in_queue=1235512, util=99.09% 00:24:41.708 nvme9n1: ios=14037/0, merge=0/0, ticks=1245006/0, in_queue=1245006, util=99.20% 00:24:41.708 07:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:41.708 [global] 00:24:41.708 thread=1 00:24:41.708 invalidate=1 00:24:41.708 rw=randwrite 00:24:41.708 time_based=1 00:24:41.708 runtime=10 00:24:41.708 ioengine=libaio 00:24:41.708 direct=1 00:24:41.708 bs=262144 00:24:41.708 iodepth=64 00:24:41.708 norandommap=1 00:24:41.708 numjobs=1 00:24:41.708 00:24:41.708 [job0] 00:24:41.708 filename=/dev/nvme0n1 00:24:41.708 [job1] 00:24:41.708 filename=/dev/nvme10n1 00:24:41.708 [job2] 00:24:41.708 filename=/dev/nvme1n1 00:24:41.708 [job3] 00:24:41.708 filename=/dev/nvme2n1 00:24:41.708 [job4] 00:24:41.708 filename=/dev/nvme3n1 00:24:41.708 [job5] 00:24:41.708 filename=/dev/nvme4n1 00:24:41.708 [job6] 00:24:41.708 filename=/dev/nvme5n1 00:24:41.708 [job7] 00:24:41.708 filename=/dev/nvme6n1 00:24:41.708 [job8] 00:24:41.708 filename=/dev/nvme7n1 00:24:41.708 [job9] 00:24:41.708 filename=/dev/nvme8n1 00:24:41.708 [job10] 00:24:41.708 filename=/dev/nvme9n1 00:24:41.708 Could not set queue depth (nvme0n1) 00:24:41.708 Could not set queue depth (nvme10n1) 00:24:41.708 Could not set queue depth (nvme1n1) 00:24:41.708 Could not set queue depth (nvme2n1) 00:24:41.708 Could not set queue depth (nvme3n1) 00:24:41.708 Could not set queue depth (nvme4n1) 00:24:41.708 Could not set queue depth (nvme5n1) 00:24:41.708 Could not set queue depth (nvme6n1) 00:24:41.708 Could not set queue depth (nvme7n1) 00:24:41.708 Could not set queue depth (nvme8n1) 00:24:41.708 Could not set queue depth (nvme9n1) 00:24:41.708 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:41.708 fio-3.35 00:24:41.708 Starting 11 threads 00:24:51.674 00:24:51.674 job0: (groupid=0, jobs=1): err= 0: pid=1576139: Sat Jul 13 07:12:20 2024 00:24:51.674 write: IOPS=454, BW=114MiB/s 
(119MB/s)(1160MiB/10195msec); 0 zone resets 00:24:51.674 slat (usec): min=17, max=86890, avg=1764.55, stdev=4967.77 00:24:51.674 clat (usec): min=1299, max=438950, avg=138835.75, stdev=90598.88 00:24:51.674 lat (usec): min=1548, max=438980, avg=140600.30, stdev=91890.68 00:24:51.674 clat percentiles (msec): 00:24:51.674 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 25], 20.00th=[ 54], 00:24:51.674 | 30.00th=[ 85], 40.00th=[ 99], 50.00th=[ 122], 60.00th=[ 146], 00:24:51.674 | 70.00th=[ 190], 80.00th=[ 218], 90.00th=[ 271], 95.00th=[ 313], 00:24:51.674 | 99.00th=[ 368], 99.50th=[ 376], 99.90th=[ 426], 99.95th=[ 426], 00:24:51.674 | 99.99th=[ 439] 00:24:51.674 bw ( KiB/s): min=45056, max=254464, per=8.61%, avg=117120.00, stdev=64641.87, samples=20 00:24:51.674 iops : min= 176, max= 994, avg=457.50, stdev=252.51, samples=20 00:24:51.674 lat (msec) : 2=0.15%, 4=0.71%, 10=2.44%, 20=4.96%, 50=10.07% 00:24:51.674 lat (msec) : 100=22.70%, 250=46.92%, 500=12.05% 00:24:51.674 cpu : usr=1.16%, sys=1.53%, ctx=2389, majf=0, minf=1 00:24:51.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:51.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.674 issued rwts: total=0,4638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.674 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.674 job1: (groupid=0, jobs=1): err= 0: pid=1576173: Sat Jul 13 07:12:20 2024 00:24:51.674 write: IOPS=600, BW=150MiB/s (157MB/s)(1508MiB/10043msec); 0 zone resets 00:24:51.674 slat (usec): min=16, max=64954, avg=1147.65, stdev=3310.48 00:24:51.674 clat (usec): min=1262, max=399992, avg=105413.76, stdev=68548.66 00:24:51.674 lat (usec): min=1333, max=400034, avg=106561.41, stdev=69279.20 00:24:51.674 clat percentiles (msec): 00:24:51.674 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 40], 20.00th=[ 45], 00:24:51.674 | 30.00th=[ 57], 40.00th=[ 75], 50.00th=[ 88], 60.00th=[ 107], 00:24:51.674 | 70.00th=[ 136], 80.00th=[ 159], 90.00th=[ 197], 95.00th=[ 228], 00:24:51.674 | 99.00th=[ 317], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 401], 00:24:51.674 | 99.99th=[ 401] 00:24:51.674 bw ( KiB/s): min=59392, max=315904, per=11.23%, avg=152755.20, stdev=59236.68, samples=20 00:24:51.674 iops : min= 232, max= 1234, avg=596.70, stdev=231.39, samples=20 00:24:51.674 lat (msec) : 2=0.12%, 4=0.27%, 10=0.85%, 20=2.99%, 50=20.98% 00:24:51.674 lat (msec) : 100=31.63%, 250=39.55%, 500=3.63% 00:24:51.675 cpu : usr=1.73%, sys=1.84%, ctx=3088, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,6030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job2: (groupid=0, jobs=1): err= 0: pid=1576188: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=376, BW=94.1MiB/s (98.7MB/s)(960MiB/10197msec); 0 zone resets 00:24:51.675 slat (usec): min=19, max=76128, avg=2343.83, stdev=5444.68 00:24:51.675 clat (usec): min=1543, max=417983, avg=167551.93, stdev=79897.25 00:24:51.675 lat (msec): min=2, max=418, avg=169.90, stdev=81.02 00:24:51.675 clat percentiles (msec): 00:24:51.675 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 58], 20.00th=[ 100], 00:24:51.675 | 30.00th=[ 136], 40.00th=[ 150], 50.00th=[ 163], 
60.00th=[ 182], 00:24:51.675 | 70.00th=[ 201], 80.00th=[ 230], 90.00th=[ 292], 95.00th=[ 309], 00:24:51.675 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 418], 00:24:51.675 | 99.99th=[ 418] 00:24:51.675 bw ( KiB/s): min=51200, max=189440, per=7.11%, avg=96665.60, stdev=37747.39, samples=20 00:24:51.675 iops : min= 200, max= 740, avg=377.60, stdev=147.45, samples=20 00:24:51.675 lat (msec) : 2=0.08%, 4=0.42%, 10=1.15%, 20=2.42%, 50=4.51% 00:24:51.675 lat (msec) : 100=11.70%, 250=65.02%, 500=14.72% 00:24:51.675 cpu : usr=1.15%, sys=1.19%, ctx=1587, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,3839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job3: (groupid=0, jobs=1): err= 0: pid=1576189: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=548, BW=137MiB/s (144MB/s)(1398MiB/10197msec); 0 zone resets 00:24:51.675 slat (usec): min=17, max=68257, avg=1263.20, stdev=4054.02 00:24:51.675 clat (usec): min=1818, max=409029, avg=115360.85, stdev=86371.57 00:24:51.675 lat (usec): min=1905, max=409081, avg=116624.05, stdev=87452.74 00:24:51.675 clat percentiles (msec): 00:24:51.675 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 44], 00:24:51.675 | 30.00th=[ 47], 40.00th=[ 55], 50.00th=[ 81], 60.00th=[ 128], 00:24:51.675 | 70.00th=[ 167], 80.00th=[ 203], 90.00th=[ 243], 95.00th=[ 271], 00:24:51.675 | 99.00th=[ 347], 99.50th=[ 376], 99.90th=[ 401], 99.95th=[ 405], 00:24:51.675 | 99.99th=[ 409] 00:24:51.675 bw ( KiB/s): min=61440, max=381440, per=10.41%, avg=141542.40, stdev=93632.21, samples=20 00:24:51.675 iops : min= 240, max= 1490, avg=552.90, stdev=365.75, samples=20 00:24:51.675 lat (msec) : 2=0.02%, 4=0.38%, 10=1.93%, 20=3.27%, 50=31.59% 00:24:51.675 lat (msec) : 100=18.83%, 250=35.47%, 500=8.51% 00:24:51.675 cpu : usr=1.68%, sys=1.93%, ctx=3140, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,5593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job4: (groupid=0, jobs=1): err= 0: pid=1576191: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=502, BW=126MiB/s (132MB/s)(1272MiB/10115msec); 0 zone resets 00:24:51.675 slat (usec): min=24, max=70601, avg=1789.78, stdev=4239.44 00:24:51.675 clat (msec): min=3, max=371, avg=125.41, stdev=65.03 00:24:51.675 lat (msec): min=3, max=371, avg=127.20, stdev=65.92 00:24:51.675 clat percentiles (msec): 00:24:51.675 | 1.00th=[ 15], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 72], 00:24:51.675 | 30.00th=[ 87], 40.00th=[ 103], 50.00th=[ 116], 60.00th=[ 129], 00:24:51.675 | 70.00th=[ 140], 80.00th=[ 174], 90.00th=[ 224], 95.00th=[ 257], 00:24:51.675 | 99.00th=[ 305], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 372], 00:24:51.675 | 99.99th=[ 372] 00:24:51.675 bw ( KiB/s): min=56832, max=274944, per=9.45%, avg=128588.80, stdev=51729.28, samples=20 00:24:51.675 iops : min= 222, max= 1074, avg=502.30, stdev=202.07, samples=20 00:24:51.675 lat (msec) : 4=0.02%, 10=0.49%, 20=1.26%, 50=6.67%, 100=29.79% 
00:24:51.675 lat (msec) : 250=55.98%, 500=5.80% 00:24:51.675 cpu : usr=1.60%, sys=1.54%, ctx=1870, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job5: (groupid=0, jobs=1): err= 0: pid=1576193: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=399, BW=99.8MiB/s (105MB/s)(1017MiB/10197msec); 0 zone resets 00:24:51.675 slat (usec): min=15, max=119368, avg=1927.34, stdev=4845.86 00:24:51.675 clat (msec): min=2, max=379, avg=158.36, stdev=73.51 00:24:51.675 lat (msec): min=2, max=379, avg=160.29, stdev=74.29 00:24:51.675 clat percentiles (msec): 00:24:51.675 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 51], 20.00th=[ 100], 00:24:51.675 | 30.00th=[ 120], 40.00th=[ 144], 50.00th=[ 159], 60.00th=[ 178], 00:24:51.675 | 70.00th=[ 197], 80.00th=[ 226], 90.00th=[ 249], 95.00th=[ 275], 00:24:51.675 | 99.00th=[ 355], 99.50th=[ 368], 99.90th=[ 376], 99.95th=[ 380], 00:24:51.675 | 99.99th=[ 380] 00:24:51.675 bw ( KiB/s): min=54784, max=169472, per=7.54%, avg=102528.00, stdev=32261.29, samples=20 00:24:51.675 iops : min= 214, max= 662, avg=400.50, stdev=126.02, samples=20 00:24:51.675 lat (msec) : 4=0.05%, 10=0.98%, 20=1.79%, 50=7.05%, 100=10.59% 00:24:51.675 lat (msec) : 250=69.94%, 500=9.58% 00:24:51.675 cpu : usr=1.28%, sys=1.16%, ctx=2017, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,4069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job6: (groupid=0, jobs=1): err= 0: pid=1576194: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=465, BW=116MiB/s (122MB/s)(1178MiB/10113msec); 0 zone resets 00:24:51.675 slat (usec): min=12, max=102874, avg=1665.63, stdev=4592.67 00:24:51.675 clat (usec): min=1588, max=373250, avg=135694.38, stdev=72467.53 00:24:51.675 lat (usec): min=1734, max=373325, avg=137360.01, stdev=73444.30 00:24:51.675 clat percentiles (msec): 00:24:51.675 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 43], 20.00th=[ 78], 00:24:51.675 | 30.00th=[ 106], 40.00th=[ 115], 50.00th=[ 129], 60.00th=[ 144], 00:24:51.675 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 251], 95.00th=[ 284], 00:24:51.675 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 347], 00:24:51.675 | 99.99th=[ 372] 00:24:51.675 bw ( KiB/s): min=59392, max=194560, per=8.75%, avg=118963.20, stdev=38235.73, samples=20 00:24:51.675 iops : min= 232, max= 760, avg=464.70, stdev=149.36, samples=20 00:24:51.675 lat (msec) : 2=0.06%, 4=0.13%, 10=1.25%, 20=1.59%, 50=9.49% 00:24:51.675 lat (msec) : 100=14.29%, 250=62.95%, 500=10.23% 00:24:51.675 cpu : usr=1.38%, sys=1.39%, ctx=2354, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,4710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job7: (groupid=0, jobs=1): err= 0: pid=1576195: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=399, BW=99.9MiB/s (105MB/s)(1011MiB/10111msec); 0 zone resets 00:24:51.675 slat (usec): min=18, max=69674, avg=1888.34, stdev=4945.44 00:24:51.675 clat (msec): min=2, max=326, avg=158.14, stdev=78.82 00:24:51.675 lat (msec): min=2, max=326, avg=160.02, stdev=79.98 00:24:51.675 clat percentiles (msec): 00:24:51.675 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 78], 00:24:51.675 | 30.00th=[ 124], 40.00th=[ 142], 50.00th=[ 165], 60.00th=[ 184], 00:24:51.675 | 70.00th=[ 207], 80.00th=[ 234], 90.00th=[ 259], 95.00th=[ 284], 00:24:51.675 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 326], 99.95th=[ 326], 00:24:51.675 | 99.99th=[ 326] 00:24:51.675 bw ( KiB/s): min=55296, max=200192, per=7.49%, avg=101862.40, stdev=42960.33, samples=20 00:24:51.675 iops : min= 216, max= 782, avg=397.90, stdev=167.81, samples=20 00:24:51.675 lat (msec) : 4=0.20%, 10=1.53%, 20=2.80%, 50=8.49%, 100=12.02% 00:24:51.675 lat (msec) : 250=62.49%, 500=12.47% 00:24:51.675 cpu : usr=1.14%, sys=1.22%, ctx=2155, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,4042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job8: (groupid=0, jobs=1): err= 0: pid=1576198: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=442, BW=111MiB/s (116MB/s)(1128MiB/10194msec); 0 zone resets 00:24:51.675 slat (usec): min=18, max=147606, avg=1891.76, stdev=5586.02 00:24:51.675 clat (usec): min=1890, max=428157, avg=142663.59, stdev=78527.50 00:24:51.675 lat (usec): min=1933, max=428190, avg=144555.35, stdev=79568.62 00:24:51.675 clat percentiles (msec): 00:24:51.675 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 40], 20.00th=[ 57], 00:24:51.675 | 30.00th=[ 84], 40.00th=[ 120], 50.00th=[ 157], 60.00th=[ 176], 00:24:51.675 | 70.00th=[ 188], 80.00th=[ 211], 90.00th=[ 239], 95.00th=[ 268], 00:24:51.675 | 99.00th=[ 305], 99.50th=[ 342], 99.90th=[ 414], 99.95th=[ 414], 00:24:51.675 | 99.99th=[ 430] 00:24:51.675 bw ( KiB/s): min=61440, max=246784, per=8.37%, avg=113843.20, stdev=50794.26, samples=20 00:24:51.675 iops : min= 240, max= 964, avg=444.70, stdev=198.42, samples=20 00:24:51.675 lat (msec) : 2=0.02%, 4=0.24%, 10=1.82%, 20=3.04%, 50=13.02% 00:24:51.675 lat (msec) : 100=16.76%, 250=57.54%, 500=7.56% 00:24:51.675 cpu : usr=1.07%, sys=1.57%, ctx=2084, majf=0, minf=1 00:24:51.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:51.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.675 issued rwts: total=0,4510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.675 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.675 job9: (groupid=0, jobs=1): err= 0: pid=1576199: Sat Jul 13 07:12:20 2024 00:24:51.675 write: IOPS=604, BW=151MiB/s (158MB/s)(1517MiB/10047msec); 0 zone resets 00:24:51.675 slat (usec): min=14, max=163630, avg=968.94, stdev=4916.82 00:24:51.675 clat (usec): min=1498, max=461019, avg=104900.97, stdev=73087.13 00:24:51.675 lat (usec): min=1572, max=461108, avg=105869.91, stdev=73681.63 00:24:51.675 clat 
percentiles (msec): 00:24:51.675 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 42], 00:24:51.675 | 30.00th=[ 59], 40.00th=[ 74], 50.00th=[ 97], 60.00th=[ 112], 00:24:51.676 | 70.00th=[ 131], 80.00th=[ 161], 90.00th=[ 197], 95.00th=[ 262], 00:24:51.676 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 456], 99.95th=[ 456], 00:24:51.676 | 99.99th=[ 460] 00:24:51.676 bw ( KiB/s): min=64512, max=247808, per=11.30%, avg=153753.60, stdev=47443.75, samples=20 00:24:51.676 iops : min= 252, max= 968, avg=600.60, stdev=185.33, samples=20 00:24:51.676 lat (msec) : 2=0.08%, 4=0.56%, 10=3.00%, 20=6.31%, 50=14.62% 00:24:51.676 lat (msec) : 100=27.14%, 250=42.63%, 500=5.67% 00:24:51.676 cpu : usr=1.75%, sys=1.77%, ctx=3811, majf=0, minf=1 00:24:51.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:51.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.676 issued rwts: total=0,6069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.676 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.676 job10: (groupid=0, jobs=1): err= 0: pid=1576201: Sat Jul 13 07:12:20 2024 00:24:51.676 write: IOPS=548, BW=137MiB/s (144MB/s)(1399MiB/10198msec); 0 zone resets 00:24:51.676 slat (usec): min=14, max=102314, avg=1195.80, stdev=3867.71 00:24:51.676 clat (usec): min=1251, max=453764, avg=115237.00, stdev=76513.07 00:24:51.676 lat (usec): min=1283, max=453793, avg=116432.79, stdev=77498.86 00:24:51.676 clat percentiles (msec): 00:24:51.676 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 42], 00:24:51.676 | 30.00th=[ 55], 40.00th=[ 79], 50.00th=[ 110], 60.00th=[ 123], 00:24:51.676 | 70.00th=[ 148], 80.00th=[ 186], 90.00th=[ 230], 95.00th=[ 249], 00:24:51.676 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 439], 99.95th=[ 439], 00:24:51.676 | 99.99th=[ 456] 00:24:51.676 bw ( KiB/s): min=65536, max=282624, per=10.41%, avg=141644.80, stdev=60188.87, samples=20 00:24:51.676 iops : min= 256, max= 1104, avg=553.30, stdev=235.11, samples=20 00:24:51.676 lat (msec) : 2=0.05%, 4=0.41%, 10=1.57%, 20=2.98%, 50=22.39% 00:24:51.676 lat (msec) : 100=18.42%, 250=49.26%, 500=4.91% 00:24:51.676 cpu : usr=1.40%, sys=1.73%, ctx=3324, majf=0, minf=1 00:24:51.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:51.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:51.676 issued rwts: total=0,5597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.676 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:51.676 00:24:51.676 Run status group 0 (all jobs): 00:24:51.676 WRITE: bw=1328MiB/s (1393MB/s), 94.1MiB/s-151MiB/s (98.7MB/s-158MB/s), io=13.2GiB (14.2GB), run=10043-10198msec 00:24:51.676 00:24:51.676 Disk stats (read/write): 00:24:51.676 nvme0n1: ios=49/9249, merge=0/0, ticks=86/1237892, in_queue=1237978, util=97.61% 00:24:51.676 nvme10n1: ios=48/11654, merge=0/0, ticks=95/1222663, in_queue=1222758, util=97.75% 00:24:51.676 nvme1n1: ios=21/7648, merge=0/0, ticks=66/1234795, in_queue=1234861, util=97.73% 00:24:51.676 nvme2n1: ios=26/11140, merge=0/0, ticks=78/1242571, in_queue=1242649, util=98.07% 00:24:51.676 nvme3n1: ios=44/9958, merge=0/0, ticks=1287/1206576, in_queue=1207863, util=99.96% 00:24:51.676 nvme4n1: ios=0/8109, merge=0/0, ticks=0/1240572, in_queue=1240572, util=98.11% 00:24:51.676 nvme5n1: ios=0/9209, merge=0/0, 
ticks=0/1212872, in_queue=1212872, util=98.23% 00:24:51.676 nvme6n1: ios=0/7870, merge=0/0, ticks=0/1214671, in_queue=1214671, util=98.31% 00:24:51.676 nvme7n1: ios=44/8994, merge=0/0, ticks=4905/1208959, in_queue=1213864, util=99.93% 00:24:51.676 nvme8n1: ios=40/11769, merge=0/0, ticks=3613/1191731, in_queue=1195344, util=99.88% 00:24:51.676 nvme9n1: ios=42/11163, merge=0/0, ticks=1473/1244653, in_queue=1246126, util=99.96% 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:51.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:51.676 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.676 07:12:20 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.676 07:12:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:51.676 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.676 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.934 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.934 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.934 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:51.935 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.935 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:52.193 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.193 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:52.452 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:52.452 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.452 07:12:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:52.711 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.711 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:52.969 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:52.969 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.969 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:53.228 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f 
./local-job0-0-verify.state 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:53.228 rmmod nvme_tcp 00:24:53.228 rmmod nvme_fabrics 00:24:53.228 rmmod nvme_keyring 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1570112 ']' 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1570112 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1570112 ']' 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1570112 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1570112 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1570112' 00:24:53.228 killing process with pid 1570112 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1570112 00:24:53.228 07:12:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1570112 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.794 07:12:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.324 07:12:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.324 00:24:56.324 real 1m0.435s 00:24:56.324 user 
3m23.949s 00:24:56.324 sys 0m23.974s 00:24:56.324 07:12:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.324 07:12:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.324 ************************************ 00:24:56.324 END TEST nvmf_multiconnection 00:24:56.324 ************************************ 00:24:56.324 07:12:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:56.324 07:12:25 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:56.324 07:12:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.324 07:12:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.324 07:12:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.324 ************************************ 00:24:56.324 START TEST nvmf_initiator_timeout 00:24:56.324 ************************************ 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:56.324 * Looking for test storage... 00:24:56.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.324 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.325 07:12:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.226 07:12:27 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:58.226 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.226 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:58.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.227 
07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:58.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:58.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:24:58.227 00:24:58.227 --- 10.0.0.2 ping statistics --- 00:24:58.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.227 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:24:58.227 00:24:58.227 --- 10.0.0.1 ping statistics --- 00:24:58.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.227 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1579519 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1579519 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1579519 ']' 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.227 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.227 [2024-07-13 07:12:27.461066] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:24:58.227 [2024-07-13 07:12:27.461137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.227 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.227 [2024-07-13 07:12:27.497983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
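The entries above show the recurring nvmftestinit pattern in these logs: one port of the E810 pair (cvl_0_0) is moved into a private network namespace, both ends are addressed from 10.0.0.0/24, reachability is verified with ping in each direction, and the SPDK target is then launched inside the namespace so initiator and target traffic traverses real NIC hardware on a single host. A minimal standalone sketch of the same setup follows — the interface names, addresses, and flags are taken from this log, while the relative spdk path is an assumption about the checkout layout:

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and confirm both directions answer before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Run nvmf_tgt inside the namespace: -i 0 sets the shm id, -e 0xFFFF enables all
    # tracepoint groups, -m 0xF pins the app to four cores (matching the log above).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &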
00:24:58.227 [2024-07-13 07:12:27.529019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.227 [2024-07-13 07:12:27.622168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.227 [2024-07-13 07:12:27.622240] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.227 [2024-07-13 07:12:27.622257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.227 [2024-07-13 07:12:27.622270] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.227 [2024-07-13 07:12:27.622287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.227 [2024-07-13 07:12:27.622376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.227 [2024-07-13 07:12:27.622444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.227 [2024-07-13 07:12:27.622546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.227 [2024-07-13 07:12:27.622548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.486 Malloc0 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.486 Delay0 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.486 [2024-07-13 07:12:27.810645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.486 [2024-07-13 07:12:27.838910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.486 07:12:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:59.051 07:12:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:59.051 07:12:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:59.051 07:12:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:59.051 07:12:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:59.051 07:12:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:01.583 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1579948 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:01.584 07:12:30 nvmf_tcp.nvmf_initiator_timeout 
-- target/initiator_timeout.sh@37 -- # sleep 3 00:25:01.584 [global] 00:25:01.584 thread=1 00:25:01.584 invalidate=1 00:25:01.584 rw=write 00:25:01.584 time_based=1 00:25:01.584 runtime=60 00:25:01.584 ioengine=libaio 00:25:01.584 direct=1 00:25:01.584 bs=4096 00:25:01.584 iodepth=1 00:25:01.584 norandommap=0 00:25:01.584 numjobs=1 00:25:01.584 00:25:01.584 verify_dump=1 00:25:01.584 verify_backlog=512 00:25:01.584 verify_state_save=0 00:25:01.584 do_verify=1 00:25:01.584 verify=crc32c-intel 00:25:01.584 [job0] 00:25:01.584 filename=/dev/nvme0n1 00:25:01.584 Could not set queue depth (nvme0n1) 00:25:01.584 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:01.584 fio-3.35 00:25:01.584 Starting 1 thread 00:25:04.105 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:04.105 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.105 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.105 true 00:25:04.105 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.105 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:04.105 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.105 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.105 true 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.106 true 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.106 true 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.106 07:12:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.379 true 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.379 07:12:36 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.379 true 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.379 true 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.379 true 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:07.379 07:12:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1579948 00:26:03.574 00:26:03.574 job0: (groupid=0, jobs=1): err= 0: pid=1580023: Sat Jul 13 07:13:30 2024 00:26:03.574 read: IOPS=7, BW=30.1KiB/s (30.8kB/s)(1808KiB/60037msec) 00:26:03.574 slat (usec): min=11, max=11536, avg=63.73, stdev=595.95 00:26:03.574 clat (usec): min=462, max=41063k, avg=132426.59, stdev=1929502.21 00:26:03.574 lat (msec): min=5, max=41063, avg=132.49, stdev=1929.50 00:26:03.574 clat percentiles (msec): 00:26:03.574 | 1.00th=[ 42], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:03.574 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 43], 00:26:03.574 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:26:03.574 | 99.00th=[ 43], 99.50th=[ 45], 99.90th=[17113], 99.95th=[17113], 00:26:03.574 | 99.99th=[17113] 00:26:03.574 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60037msec); 0 zone resets 00:26:03.574 slat (nsec): min=6381, max=46175, avg=17857.80, stdev=5840.04 00:26:03.574 clat (usec): min=218, max=3989, avg=264.02, stdev=166.21 00:26:03.574 lat (usec): min=226, max=4018, avg=281.88, stdev=167.03 00:26:03.574 clat percentiles (usec): 00:26:03.574 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:26:03.574 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:26:03.574 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 289], 00:26:03.574 | 99.00th=[ 314], 99.50th=[ 375], 99.90th=[ 3982], 99.95th=[ 3982], 00:26:03.574 | 99.99th=[ 3982] 00:26:03.574 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:26:03.574 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:03.574 lat (usec) : 250=21.58%, 500=31.54% 00:26:03.574 lat (msec) : 4=0.10%, 50=46.68%, >=2000=0.10% 00:26:03.574 cpu : usr=0.02%, sys=0.04%, ctx=966, majf=0, minf=2 00:26:03.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.574 issued rwts: total=452,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.574 latency : target=0, window=0, percentile=100.00%, depth=1 
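The throughput and latency figures in the per-job block above are the injected delay at work, not a transport fault: three seconds into the fio run the Delay0 latencies were raised to 31,000,000 µs (31 s; p99_write to 310 s), held briefly, then dropped back to 30 µs, so I/O caught inside that window completes tens of seconds late while err=0 confirms the initiator rode out the stall rather than timing out. A condensed sketch of that sequence as plain commands — rpc_cmd in the log is the test framework's wrapper, assumed here to be equivalent to scripts/rpc.py, and the fio-wrapper flags are the ones from this run:

    # Raise the delay-bdev latencies (arguments are in microseconds).
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read   31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read   31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000

    # 4 KiB sequential writes, queue depth 1, 60 s runtime, with verification.
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &

    # After a short hold, drop the latencies so queued I/O can drain.
    sleep 3
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30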
00:26:03.574 00:26:03.574 Run status group 0 (all jobs): 00:26:03.574 READ: bw=30.1KiB/s (30.8kB/s), 30.1KiB/s-30.1KiB/s (30.8kB/s-30.8kB/s), io=1808KiB (1851kB), run=60037-60037msec 00:26:03.574 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60037-60037msec 00:26:03.574 00:26:03.574 Disk stats (read/write): 00:26:03.574 nvme0n1: ios=547/512, merge=0/0, ticks=18728/131, in_queue=18859, util=99.68% 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:03.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:03.574 nvmf hotplug test: fio successful as expected 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:03.574 rmmod nvme_tcp 00:26:03.574 rmmod nvme_fabrics 00:26:03.574 rmmod nvme_keyring 00:26:03.574 07:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- 
# return 0 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1579519 ']' 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1579519 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1579519 ']' 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1579519 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:03.574 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1579519 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1579519' 00:26:03.575 killing process with pid 1579519 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1579519 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1579519 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.575 07:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.139 07:13:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:04.139 00:26:04.139 real 1m8.065s 00:26:04.139 user 4m10.767s 00:26:04.139 sys 0m6.169s 00:26:04.139 07:13:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:04.139 07:13:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.139 ************************************ 00:26:04.139 END TEST nvmf_initiator_timeout 00:26:04.139 ************************************ 00:26:04.139 07:13:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:04.139 07:13:33 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:04.139 07:13:33 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:04.139 07:13:33 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:04.139 07:13:33 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.139 07:13:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:06.038 07:13:35 nvmf_tcp 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:06.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:06.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:06.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:06.038 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:06.038 07:13:35 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.039 07:13:35 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:06.039 07:13:35 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.039 07:13:35 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:06.039 07:13:35 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:06.039 07:13:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:06.039 07:13:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.039 07:13:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:06.039 ************************************ 00:26:06.039 START TEST nvmf_perf_adq 00:26:06.039 ************************************ 00:26:06.039 07:13:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:06.039 * Looking for test storage... 
00:26:06.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:06.297 07:13:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:06.298 07:13:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:08.203 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:08.203 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:08.203 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:08.203 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:08.203 07:13:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:08.769 07:13:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:10.667 07:13:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:15.933 07:13:44 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:15.933 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:15.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:15.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:15.934 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:15.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:15.934 07:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.934 07:13:45 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:15.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:26:15.934 00:26:15.934 --- 10.0.0.2 ping statistics --- 00:26:15.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.934 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:15.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:15.934 00:26:15.934 --- 10.0.0.1 ping statistics --- 00:26:15.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.934 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1591533 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1591533 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1591533 ']' 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.934 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:15.934 [2024-07-13 07:13:45.202710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
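Collapsed into plain commands, the nvmf_tcp_init sequence traced above (common.sh@229-268) builds the test topology: one E810 port moves into the cvl_0_0_ns_spdk namespace as the target side, its sibling stays in the root namespace as the initiator, the NVMe/TCP port is opened in the firewall, and bidirectional pings confirm the path before the target starts:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator

Because both ports sit on the same physical adapter, this gives a real on-wire NVMe/TCP path without a second host; nvmf_tgt is then launched inside the namespace via the NVMF_TARGET_NS_CMD prefix, as the startup lines that follow show.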
00:26:15.934 [2024-07-13 07:13:45.202812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.934 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.934 [2024-07-13 07:13:45.240809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:15.934 [2024-07-13 07:13:45.272736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.934 [2024-07-13 07:13:45.363459] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.934 [2024-07-13 07:13:45.363539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.934 [2024-07-13 07:13:45.363564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.934 [2024-07-13 07:13:45.363578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.934 [2024-07-13 07:13:45.363590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.934 [2024-07-13 07:13:45.363683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.934 [2024-07-13 07:13:45.363762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.934 [2024-07-13 07:13:45.363845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.934 [2024-07-13 07:13:45.363847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 
-- # rpc_cmd framework_start_init 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 [2024-07-13 07:13:45.587820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 Malloc1 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.192 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.193 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.193 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.193 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.193 [2024-07-13 07:13:45.641263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.193 07:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.193 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1591567 00:26:16.193 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:16.193 07:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:16.450 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.349 
07:13:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:18.349 "tick_rate": 2700000000, 00:26:18.349 "poll_groups": [ 00:26:18.349 { 00:26:18.349 "name": "nvmf_tgt_poll_group_000", 00:26:18.349 "admin_qpairs": 1, 00:26:18.349 "io_qpairs": 1, 00:26:18.349 "current_admin_qpairs": 1, 00:26:18.349 "current_io_qpairs": 1, 00:26:18.349 "pending_bdev_io": 0, 00:26:18.349 "completed_nvme_io": 20050, 00:26:18.349 "transports": [ 00:26:18.349 { 00:26:18.349 "trtype": "TCP" 00:26:18.349 } 00:26:18.349 ] 00:26:18.349 }, 00:26:18.349 { 00:26:18.349 "name": "nvmf_tgt_poll_group_001", 00:26:18.349 "admin_qpairs": 0, 00:26:18.349 "io_qpairs": 1, 00:26:18.349 "current_admin_qpairs": 0, 00:26:18.349 "current_io_qpairs": 1, 00:26:18.349 "pending_bdev_io": 0, 00:26:18.349 "completed_nvme_io": 20946, 00:26:18.349 "transports": [ 00:26:18.349 { 00:26:18.349 "trtype": "TCP" 00:26:18.349 } 00:26:18.349 ] 00:26:18.349 }, 00:26:18.349 { 00:26:18.349 "name": "nvmf_tgt_poll_group_002", 00:26:18.349 "admin_qpairs": 0, 00:26:18.349 "io_qpairs": 1, 00:26:18.349 "current_admin_qpairs": 0, 00:26:18.349 "current_io_qpairs": 1, 00:26:18.349 "pending_bdev_io": 0, 00:26:18.349 "completed_nvme_io": 20796, 00:26:18.349 "transports": [ 00:26:18.349 { 00:26:18.349 "trtype": "TCP" 00:26:18.349 } 00:26:18.349 ] 00:26:18.349 }, 00:26:18.349 { 00:26:18.349 "name": "nvmf_tgt_poll_group_003", 00:26:18.349 "admin_qpairs": 0, 00:26:18.349 "io_qpairs": 1, 00:26:18.349 "current_admin_qpairs": 0, 00:26:18.349 "current_io_qpairs": 1, 00:26:18.349 "pending_bdev_io": 0, 00:26:18.349 "completed_nvme_io": 20545, 00:26:18.349 "transports": [ 00:26:18.349 { 00:26:18.349 "trtype": "TCP" 00:26:18.349 } 00:26:18.349 ] 00:26:18.349 } 00:26:18.349 ] 00:26:18.349 }' 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:18.349 07:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1591567 00:26:26.452 Initializing NVMe Controllers 00:26:26.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:26.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:26.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:26.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:26.452 Initialization complete. Launching workers. 
00:26:26.452 ======================================================== 00:26:26.453 Latency(us) 00:26:26.453 Device Information : IOPS MiB/s Average min max 00:26:26.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10892.36 42.55 5877.60 4842.02 7872.93 00:26:26.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11007.56 43.00 5816.02 4995.88 7722.09 00:26:26.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10955.86 42.80 5842.79 2276.63 9458.80 00:26:26.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10668.37 41.67 5998.39 5138.53 7841.92 00:26:26.453 ======================================================== 00:26:26.453 Total : 43524.15 170.02 5882.87 2276.63 9458.80 00:26:26.453 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.453 rmmod nvme_tcp 00:26:26.453 rmmod nvme_fabrics 00:26:26.453 rmmod nvme_keyring 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1591533 ']' 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1591533 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1591533 ']' 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1591533 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1591533 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1591533' 00:26:26.453 killing process with pid 1591533 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1591533 00:26:26.453 07:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1591533 00:26:26.710 07:13:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:26.710 07:13:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:26.710 07:13:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:26.710 07:13:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.711 07:13:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.711 07:13:56 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.711 07:13:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.711 07:13:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.244 07:13:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:29.244 07:13:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:29.244 07:13:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:29.508 07:13:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:31.427 07:14:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.692 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.692 07:14:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:36.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:36.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
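While this second pass repeats the same interface discovery, the substantive difference from the first run lies in how adq_configure_nvmf_target (perf_adq.sh@42-49, traced in both passes) programs the target: the first run used --enable-placement-id 0 and --sock-priority 0, whereas the ADQ-enabled run below uses 1 for both. A sketch of that RPC sequence using scripts/rpc.py; treating rpc_cmd in the trace as equivalent to scripts/rpc.py against the same RPC socket is an assumption, since rpc_cmd is a harness wrapper:

# query the default socket implementation (posix in this run)
impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)
scripts/rpc.py sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init                     # target was started with --wait-for-rpc
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MiB RAM disk, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After spdk_nvme_perf connects, the test checks how qpairs spread across reactors by counting poll groups in the stats output, as seen at perf_adq.sh@78 above:

scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l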
00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:36.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:36.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.693 
07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:36.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:26:36.693 00:26:36.693 --- 10.0.0.2 ping statistics --- 00:26:36.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.693 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:26:36.693 00:26:36.693 --- 10.0.0.1 ping statistics --- 00:26:36.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.693 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:36.693 07:14:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:36.693 net.core.busy_poll = 1 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:36.693 net.core.busy_read = 1 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1594174 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1594174 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1594174 ']' 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.693 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.002 [2024-07-13 07:14:06.173499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:37.002 [2024-07-13 07:14:06.173587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.002 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.002 [2024-07-13 07:14:06.212413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:37.002 [2024-07-13 07:14:06.239349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.002 [2024-07-13 07:14:06.333313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.002 [2024-07-13 07:14:06.333372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.002 [2024-07-13 07:14:06.333388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.002 [2024-07-13 07:14:06.333401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.002 [2024-07-13 07:14:06.333413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
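The adq_configure_driver block traced just above (perf_adq.sh@22-38) is the core of the ADQ setup. Condensed, with the same netns prefixes the trace uses:

ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1     # let poll/epoll busy-poll the dedicated queues
sysctl -w net.core.busy_read=1     # same for blocking reads
# two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2,
# offloaded to the NIC in channel mode
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP (dst port 4420) into TC1 entirely in hardware (skip_sw)
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper then pins each transmit queue to its matching receive queue so a connection's traffic stays on one queue pair; combined with --sock-priority 1 on the transport, this is why the nvmf_get_stats output below shows the I/O qpairs concentrated on two busy-polled poll groups instead of spread evenly across all four.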
00:26:37.002 [2024-07-13 07:14:06.333504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.002 [2024-07-13 07:14:06.333552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.002 [2024-07-13 07:14:06.333668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.002 [2024-07-13 07:14:06.333670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.002 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.264 [2024-07-13 07:14:06.583495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.264 Malloc1 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.264 [2024-07-13 07:14:06.636747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1594321 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:37.264 07:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:37.264 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:39.790 "tick_rate": 2700000000, 00:26:39.790 "poll_groups": [ 00:26:39.790 { 00:26:39.790 "name": "nvmf_tgt_poll_group_000", 00:26:39.790 "admin_qpairs": 1, 00:26:39.790 "io_qpairs": 2, 00:26:39.790 "current_admin_qpairs": 1, 00:26:39.790 "current_io_qpairs": 2, 00:26:39.790 "pending_bdev_io": 0, 00:26:39.790 "completed_nvme_io": 24180, 00:26:39.790 "transports": [ 00:26:39.790 { 00:26:39.790 "trtype": "TCP" 00:26:39.790 } 00:26:39.790 ] 00:26:39.790 }, 00:26:39.790 { 00:26:39.790 "name": "nvmf_tgt_poll_group_001", 00:26:39.790 "admin_qpairs": 0, 00:26:39.790 "io_qpairs": 2, 00:26:39.790 "current_admin_qpairs": 0, 00:26:39.790 "current_io_qpairs": 2, 00:26:39.790 "pending_bdev_io": 0, 00:26:39.790 "completed_nvme_io": 27482, 00:26:39.790 "transports": [ 00:26:39.790 { 00:26:39.790 "trtype": "TCP" 00:26:39.790 } 00:26:39.790 ] 00:26:39.790 }, 00:26:39.790 { 00:26:39.790 "name": "nvmf_tgt_poll_group_002", 00:26:39.790 "admin_qpairs": 0, 00:26:39.790 "io_qpairs": 0, 00:26:39.790 "current_admin_qpairs": 0, 00:26:39.790 "current_io_qpairs": 0, 00:26:39.790 "pending_bdev_io": 0, 00:26:39.790 "completed_nvme_io": 0, 
00:26:39.790 "transports": [ 00:26:39.790 { 00:26:39.790 "trtype": "TCP" 00:26:39.790 } 00:26:39.790 ] 00:26:39.790 }, 00:26:39.790 { 00:26:39.790 "name": "nvmf_tgt_poll_group_003", 00:26:39.790 "admin_qpairs": 0, 00:26:39.790 "io_qpairs": 0, 00:26:39.790 "current_admin_qpairs": 0, 00:26:39.790 "current_io_qpairs": 0, 00:26:39.790 "pending_bdev_io": 0, 00:26:39.790 "completed_nvme_io": 0, 00:26:39.790 "transports": [ 00:26:39.790 { 00:26:39.790 "trtype": "TCP" 00:26:39.790 } 00:26:39.790 ] 00:26:39.790 } 00:26:39.790 ] 00:26:39.790 }' 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:39.790 07:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1594321 00:26:47.890 Initializing NVMe Controllers 00:26:47.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:47.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:47.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:47.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:47.890 Initialization complete. Launching workers. 00:26:47.890 ======================================================== 00:26:47.890 Latency(us) 00:26:47.890 Device Information : IOPS MiB/s Average min max 00:26:47.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7166.50 27.99 8962.60 2696.82 53348.09 00:26:47.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8094.20 31.62 7908.27 2004.18 51444.11 00:26:47.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5524.60 21.58 11606.46 1763.23 56863.39 00:26:47.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5855.50 22.87 10953.95 1746.24 54024.12 00:26:47.890 ======================================================== 00:26:47.890 Total : 26640.80 104.07 9628.22 1746.24 56863.39 00:26:47.890 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.890 rmmod nvme_tcp 00:26:47.890 rmmod nvme_fabrics 00:26:47.890 rmmod nvme_keyring 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1594174 ']' 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1594174 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1594174 ']' 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1594174 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:47.890 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1594174 00:26:47.891 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:47.891 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:47.891 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1594174' 00:26:47.891 killing process with pid 1594174 00:26:47.891 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1594174 00:26:47.891 07:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1594174 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.891 07:14:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.180 07:14:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:51.180 07:14:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:51.180 00:26:51.180 real 0m44.712s 00:26:51.180 user 2m37.263s 00:26:51.180 sys 0m10.314s 00:26:51.180 07:14:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.180 07:14:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.180 ************************************ 00:26:51.180 END TEST nvmf_perf_adq 00:26:51.180 ************************************ 00:26:51.180 07:14:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:51.180 07:14:20 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:51.180 07:14:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:51.180 07:14:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.181 07:14:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:51.181 ************************************ 00:26:51.181 START TEST nvmf_shutdown 00:26:51.181 ************************************ 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:51.181 * Looking for test storage... 
00:26:51.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:51.181 ************************************ 00:26:51.181 START TEST nvmf_shutdown_tc1 00:26:51.181 ************************************ 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:26:51.181 07:14:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.181 07:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.077 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:53.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:53.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.078 07:14:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:53.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:53.078 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:26:53.078 00:26:53.078 --- 10.0.0.2 ping statistics --- 00:26:53.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.078 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:26:53.078 00:26:53.078 --- 10.0.0.1 ping statistics --- 00:26:53.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.078 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1597601 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1597601 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1597601 ']' 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.078 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.079 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.079 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.079 [2024-07-13 07:14:22.519482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:26:53.079 [2024-07-13 07:14:22.519582] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.336 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.336 [2024-07-13 07:14:22.563822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:53.336 [2024-07-13 07:14:22.589987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.336 [2024-07-13 07:14:22.680923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.336 [2024-07-13 07:14:22.680985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.336 [2024-07-13 07:14:22.681000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.336 [2024-07-13 07:14:22.681011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.336 [2024-07-13 07:14:22.681021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.336 [2024-07-13 07:14:22.681076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.336 [2024-07-13 07:14:22.681104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.336 [2024-07-13 07:14:22.681129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:53.336 [2024-07-13 07:14:22.681132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.593 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.593 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:53.593 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:53.593 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.594 [2024-07-13 07:14:22.821507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.594 07:14:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.594 07:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.594 Malloc1 00:26:53.594 [2024-07-13 07:14:22.896595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.594 Malloc2 00:26:53.594 Malloc3 00:26:53.594 Malloc4 00:26:53.852 Malloc5 00:26:53.852 Malloc6 00:26:53.852 Malloc7 00:26:53.852 Malloc8 00:26:53.852 Malloc9 00:26:53.852 Malloc10 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:54.110 07:14:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1597781 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1597781 /var/tmp/bdevperf.sock 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1597781 ']' 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:54.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.110 { 00:26:54.110 "params": { 00:26:54.110 "name": "Nvme$subsystem", 00:26:54.110 "trtype": "$TEST_TRANSPORT", 00:26:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.110 "adrfam": "ipv4", 00:26:54.110 "trsvcid": "$NVMF_PORT", 00:26:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.110 "hdgst": ${hdgst:-false}, 00:26:54.110 "ddgst": ${ddgst:-false} 00:26:54.110 }, 00:26:54.110 "method": "bdev_nvme_attach_controller" 00:26:54.110 } 00:26:54.110 EOF 00:26:54.110 )") 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.110 { 00:26:54.110 "params": { 00:26:54.110 "name": "Nvme$subsystem", 00:26:54.110 "trtype": "$TEST_TRANSPORT", 00:26:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.110 "adrfam": "ipv4", 00:26:54.110 "trsvcid": "$NVMF_PORT", 00:26:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.110 "hdgst": ${hdgst:-false}, 00:26:54.110 "ddgst": ${ddgst:-false} 00:26:54.110 }, 00:26:54.110 "method": "bdev_nvme_attach_controller" 00:26:54.110 } 00:26:54.110 EOF 00:26:54.110 )") 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.110 { 00:26:54.110 "params": { 00:26:54.110 "name": "Nvme$subsystem", 00:26:54.110 "trtype": "$TEST_TRANSPORT", 00:26:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.110 "adrfam": "ipv4", 00:26:54.110 "trsvcid": "$NVMF_PORT", 00:26:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.110 "hdgst": ${hdgst:-false}, 00:26:54.110 "ddgst": ${ddgst:-false} 00:26:54.110 }, 00:26:54.110 "method": "bdev_nvme_attach_controller" 00:26:54.110 } 00:26:54.110 EOF 00:26:54.110 )") 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.110 { 00:26:54.110 "params": { 00:26:54.110 "name": "Nvme$subsystem", 00:26:54.110 "trtype": "$TEST_TRANSPORT", 00:26:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.110 "adrfam": "ipv4", 00:26:54.110 "trsvcid": "$NVMF_PORT", 00:26:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.110 "hdgst": ${hdgst:-false}, 00:26:54.110 "ddgst": ${ddgst:-false} 00:26:54.110 }, 00:26:54.110 "method": "bdev_nvme_attach_controller" 00:26:54.110 } 00:26:54.110 EOF 00:26:54.110 )") 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.110 { 00:26:54.110 "params": { 00:26:54.110 "name": "Nvme$subsystem", 00:26:54.110 "trtype": "$TEST_TRANSPORT", 00:26:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.110 "adrfam": "ipv4", 00:26:54.110 "trsvcid": "$NVMF_PORT", 00:26:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.110 "hdgst": ${hdgst:-false}, 00:26:54.110 "ddgst": ${ddgst:-false} 00:26:54.110 }, 00:26:54.110 "method": "bdev_nvme_attach_controller" 00:26:54.110 } 00:26:54.110 EOF 00:26:54.110 )") 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.110 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.110 { 00:26:54.110 "params": { 00:26:54.110 "name": "Nvme$subsystem", 00:26:54.110 "trtype": "$TEST_TRANSPORT", 00:26:54.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.110 "adrfam": "ipv4", 00:26:54.110 "trsvcid": "$NVMF_PORT", 00:26:54.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.110 "hdgst": ${hdgst:-false}, 00:26:54.110 "ddgst": ${ddgst:-false} 00:26:54.110 }, 00:26:54.110 "method": "bdev_nvme_attach_controller" 00:26:54.110 } 00:26:54.110 EOF 00:26:54.111 )") 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.111 { 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme$subsystem", 00:26:54.111 "trtype": "$TEST_TRANSPORT", 00:26:54.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "$NVMF_PORT", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.111 "hdgst": ${hdgst:-false}, 00:26:54.111 "ddgst": ${ddgst:-false} 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 } 00:26:54.111 EOF 00:26:54.111 )") 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.111 { 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme$subsystem", 00:26:54.111 "trtype": "$TEST_TRANSPORT", 00:26:54.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "$NVMF_PORT", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.111 "hdgst": ${hdgst:-false}, 00:26:54.111 "ddgst": ${ddgst:-false} 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 } 00:26:54.111 EOF 00:26:54.111 )") 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.111 { 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme$subsystem", 00:26:54.111 "trtype": "$TEST_TRANSPORT", 00:26:54.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "$NVMF_PORT", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.111 "hdgst": ${hdgst:-false}, 00:26:54.111 "ddgst": ${ddgst:-false} 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 } 00:26:54.111 EOF 00:26:54.111 )") 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.111 { 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme$subsystem", 00:26:54.111 "trtype": "$TEST_TRANSPORT", 00:26:54.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "$NVMF_PORT", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.111 "hdgst": ${hdgst:-false}, 00:26:54.111 "ddgst": ${ddgst:-false} 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 } 00:26:54.111 EOF 00:26:54.111 )") 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:54.111 07:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme1", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme2", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme3", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme4", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme5", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme6", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme7", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme8", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:54.111 "hdgst": false, 
00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme9", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 },{ 00:26:54.111 "params": { 00:26:54.111 "name": "Nvme10", 00:26:54.111 "trtype": "tcp", 00:26:54.111 "traddr": "10.0.0.2", 00:26:54.111 "adrfam": "ipv4", 00:26:54.111 "trsvcid": "4420", 00:26:54.111 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:54.111 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:54.111 "hdgst": false, 00:26:54.111 "ddgst": false 00:26:54.111 }, 00:26:54.111 "method": "bdev_nvme_attach_controller" 00:26:54.111 }' 00:26:54.111 [2024-07-13 07:14:23.388569] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:54.111 [2024-07-13 07:14:23.388641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:54.111 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.111 [2024-07-13 07:14:23.424798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:54.111 [2024-07-13 07:14:23.453827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.111 [2024-07-13 07:14:23.540614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.005 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.005 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:56.005 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:56.005 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.005 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.005 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.006 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1597781 00:26:56.006 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:56.006 07:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:56.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1597781 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:56.937 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1597601 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": 
${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 
}, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.938 { 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme$subsystem", 00:26:56.938 "trtype": "$TEST_TRANSPORT", 00:26:56.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "$NVMF_PORT", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.938 "hdgst": ${hdgst:-false}, 00:26:56.938 "ddgst": ${ddgst:-false} 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 } 00:26:56.938 EOF 00:26:56.938 )") 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:56.938 07:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme1", 00:26:56.938 "trtype": "tcp", 00:26:56.938 "traddr": "10.0.0.2", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "4420", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:56.938 "hdgst": false, 00:26:56.938 "ddgst": false 00:26:56.938 }, 00:26:56.938 "method": "bdev_nvme_attach_controller" 00:26:56.938 },{ 00:26:56.938 "params": { 00:26:56.938 "name": "Nvme2", 00:26:56.938 "trtype": "tcp", 00:26:56.938 "traddr": "10.0.0.2", 00:26:56.938 "adrfam": "ipv4", 00:26:56.938 "trsvcid": "4420", 00:26:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:56.938 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:56.938 "hdgst": false, 00:26:56.938 "ddgst": false 00:26:56.938 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme3", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:56.939 "hdgst": false, 00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme4", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:56.939 "hdgst": false, 00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme5", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:56.939 "hdgst": false, 00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme6", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:56.939 "hdgst": false, 00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme7", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:56.939 "hdgst": false, 00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme8", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:56.939 "hdgst": false, 
00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme9", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:56.939 "hdgst": false, 00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 },{ 00:26:56.939 "params": { 00:26:56.939 "name": "Nvme10", 00:26:56.939 "trtype": "tcp", 00:26:56.939 "traddr": "10.0.0.2", 00:26:56.939 "adrfam": "ipv4", 00:26:56.939 "trsvcid": "4420", 00:26:56.939 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:56.939 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:56.939 "hdgst": false, 00:26:56.939 "ddgst": false 00:26:56.939 }, 00:26:56.939 "method": "bdev_nvme_attach_controller" 00:26:56.939 }' 00:26:56.939 [2024-07-13 07:14:26.391695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:56.939 [2024-07-13 07:14:26.391778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598084 ] 00:26:57.197 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.197 [2024-07-13 07:14:26.428108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:57.197 [2024-07-13 07:14:26.456684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.197 [2024-07-13 07:14:26.543270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.105 Running I/O for 1 seconds... 
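Each object in the expanded configuration printed above is one bdev_nvme_attach_controller RPC that bdevperf replays at start-up from its --json stream (passed as /dev/fd/63, as the tc2 invocation further below shows). Attaching the first controller by hand against a running app would look roughly like the following; the flag spellings are per SPDK's scripts/rpc.py as commonly documented, so treat this as a sketch rather than this test's code path:

# Hypothetical one-off equivalent of the first config entry above:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1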
00:27:00.042
00:27:00.042 Latency(us)
00:27:00.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.042 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.042 Verification LBA range: start 0x0 length 0x400
00:27:00.042 Nvme1n1 : 1.13 225.85 14.12 0.00 0.00 280582.26 18738.44 253211.69
00:27:00.042 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.042 Verification LBA range: start 0x0 length 0x400
00:27:00.042 Nvme2n1 : 1.14 224.64 14.04 0.00 0.00 277579.28 21651.15 257872.02
00:27:00.042 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.042 Verification LBA range: start 0x0 length 0x400
00:27:00.042 Nvme3n1 : 1.07 244.05 15.25 0.00 0.00 249908.84 4733.16 239230.67
00:27:00.042 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.042 Verification LBA range: start 0x0 length 0x400
00:27:00.042 Nvme4n1 : 1.12 232.32 14.52 0.00 0.00 253453.11 19029.71 250104.79
00:27:00.042 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.042 Verification LBA range: start 0x0 length 0x400
00:27:00.042 Nvme5n1 : 1.15 221.77 13.86 0.00 0.00 267574.04 21359.88 271853.04
00:27:00.042 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.042 Verification LBA range: start 0x0 length 0x400
00:27:00.042 Nvme6n1 : 1.15 223.05 13.94 0.00 0.00 261262.60 18544.26 254765.13
00:27:00.042 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.042 Verification LBA range: start 0x0 length 0x400
00:27:00.042 Nvme7n1 : 1.19 268.87 16.80 0.00 0.00 213862.17 16019.91 253211.69
00:27:00.043 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.043 Verification LBA range: start 0x0 length 0x400
00:27:00.043 Nvme8n1 : 1.18 270.43 16.90 0.00 0.00 208690.14 19029.71 253211.69
00:27:00.043 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.043 Verification LBA range: start 0x0 length 0x400
00:27:00.043 Nvme9n1 : 1.16 220.57 13.79 0.00 0.00 251235.18 20388.98 271853.04
00:27:00.043 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:00.043 Verification LBA range: start 0x0 length 0x400
00:27:00.043 Nvme10n1 : 1.21 264.03 16.50 0.00 0.00 207681.23 9514.86 285834.05
00:27:00.043 ===================================================================================================================
00:27:00.043 Total : 2395.58 149.72 0.00 0.00 244617.90 4733.16 285834.05
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.303 rmmod nvme_tcp 00:27:00.303 rmmod nvme_fabrics 00:27:00.303 rmmod nvme_keyring 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1597601 ']' 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1597601 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1597601 ']' 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1597601 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1597601 00:27:00.303 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:00.304 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:00.304 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1597601' 00:27:00.304 killing process with pid 1597601 00:27:00.304 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1597601 00:27:00.304 07:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1597601 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.871 07:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:02.776 00:27:02.776 real 0m11.861s 00:27:02.776 user 0m34.355s 00:27:02.776 sys 0m3.237s 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:02.776 07:14:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:02.776 ************************************ 00:27:02.776 END TEST nvmf_shutdown_tc1 00:27:02.776 ************************************ 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:02.776 ************************************ 00:27:02.776 START TEST nvmf_shutdown_tc2 00:27:02.776 ************************************ 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.776 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:03.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.036 07:14:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:03.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.036 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:03.037 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:03.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:03.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:27:03.037 00:27:03.037 --- 10.0.0.2 ping statistics --- 00:27:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.037 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:27:03.037 00:27:03.037 --- 10.0.0.1 ping statistics --- 00:27:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.037 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1598931 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1598931 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1598931 ']' 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
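The nvmf_tcp_init sequence above (nvmf/common.sh@229-268) is what lets the initiator and the target share one host: the cvl_0_0 port moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, and one ping in each direction proves the link before nvmf_tgt is launched inside the namespace. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves in
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator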
00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.037 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.037 [2024-07-13 07:14:32.458651] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:03.037 [2024-07-13 07:14:32.458750] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.297 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.297 [2024-07-13 07:14:32.504138] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:03.297 [2024-07-13 07:14:32.536487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.297 [2024-07-13 07:14:32.635733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.297 [2024-07-13 07:14:32.635795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.297 [2024-07-13 07:14:32.635812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.297 [2024-07-13 07:14:32.635826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.297 [2024-07-13 07:14:32.635837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:03.297 [2024-07-13 07:14:32.635932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.297 [2024-07-13 07:14:32.639885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.297 [2024-07-13 07:14:32.639919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:03.297 [2024-07-13 07:14:32.639923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.562 [2024-07-13 07:14:32.792803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:03.562 07:14:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.562 07:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.562 Malloc1 00:27:03.562 [2024-07-13 07:14:32.878483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.562 Malloc2 00:27:03.562 Malloc3 00:27:03.562 Malloc4 00:27:03.822 Malloc5 00:27:03.822 Malloc6 00:27:03.822 Malloc7 00:27:03.822 Malloc8 00:27:03.822 Malloc9 00:27:04.081 Malloc10 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.081 07:14:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1599029 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1599029 /var/tmp/bdevperf.sock 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1599029 ']' 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:04.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": 
${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.081 { 00:27:04.081 "params": { 00:27:04.081 "name": "Nvme$subsystem", 00:27:04.081 "trtype": "$TEST_TRANSPORT", 00:27:04.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.081 "adrfam": "ipv4", 00:27:04.081 "trsvcid": "$NVMF_PORT", 00:27:04.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.081 "hdgst": ${hdgst:-false}, 00:27:04.081 "ddgst": ${ddgst:-false} 00:27:04.081 }, 00:27:04.081 "method": "bdev_nvme_attach_controller" 00:27:04.081 } 00:27:04.081 EOF 00:27:04.081 )") 00:27:04.081 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.082 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.082 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.082 { 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme$subsystem", 00:27:04.082 "trtype": "$TEST_TRANSPORT", 00:27:04.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "$NVMF_PORT", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.082 "hdgst": ${hdgst:-false}, 00:27:04.082 
"ddgst": ${ddgst:-false} 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 } 00:27:04.082 EOF 00:27:04.082 )") 00:27:04.082 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:04.082 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:27:04.082 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:04.082 07:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme1", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme2", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme3", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme4", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme5", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme6", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme7", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 
},{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme8", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme9", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 },{ 00:27:04.082 "params": { 00:27:04.082 "name": "Nvme10", 00:27:04.082 "trtype": "tcp", 00:27:04.082 "traddr": "10.0.0.2", 00:27:04.082 "adrfam": "ipv4", 00:27:04.082 "trsvcid": "4420", 00:27:04.082 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:04.082 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:04.082 "hdgst": false, 00:27:04.082 "ddgst": false 00:27:04.082 }, 00:27:04.082 "method": "bdev_nvme_attach_controller" 00:27:04.082 }' 00:27:04.082 [2024-07-13 07:14:33.393740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:04.082 [2024-07-13 07:14:33.393811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599029 ] 00:27:04.082 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.082 [2024-07-13 07:14:33.429246] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:04.082 [2024-07-13 07:14:33.458294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.341 [2024-07-13 07:14:33.546890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.716 Running I/O for 10 seconds... 
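What follows is target/shutdown.sh's waitforio gate: before killing bdevperf, the harness polls its RPC socket until Nvme1n1 reports at least 100 completed reads, retrying up to 10 times at 0.25 s intervals (the trace below shows counts of 3, then 67, then 131). Condensed, with rpc.py standing in for the repo's rpc_cmd wrapper:

i=10
ret=1
while (( i-- )); do
    # Ask bdevperf for Nvme1n1's I/O stats and pull out the read-op count.
    read_io_count=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0   # enough traffic observed; safe to start the shutdown
        break
    fi
    sleep 0.25
done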
00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.007 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.265 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.265 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:06.265 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:06.265 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:06.523 07:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@67 -- # sleep 0.25
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1599029
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1599029 ']'
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1599029
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1599029
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1599029'
00:27:06.781 killing process with pid 1599029
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1599029
00:27:06.781 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1599029
00:27:06.781 Received shutdown signal, test time was about 1.103057 seconds
00:27:06.781
00:27:06.781 Latency(us)
00:27:06.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.781 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme1n1 : 1.10 232.26 14.52 0.00 0.00 272772.17 23592.96 253211.69
00:27:06.781 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme2n1 : 1.08 237.59 14.85 0.00 0.00 262004.43 20291.89 251658.24
00:27:06.781 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme3n1 : 1.04 245.21 15.33 0.00 0.00 249129.53 17961.72 246997.90
00:27:06.781 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme4n1 : 1.06 240.78 15.05 0.00 0.00 249068.66 18350.08 260978.92
00:27:06.781 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme5n1 : 1.09 236.98 14.81 0.00 0.00 248957.87 1844.72 254765.13
00:27:06.781 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme6n1 : 1.09 235.22 14.70 0.00 0.00 246523.45 21554.06 257872.02
00:27:06.781 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme7n1 : 1.08 236.55 14.78 0.00 0.00 240449.04 17185.00 260978.92
00:27:06.781 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme8n1 : 1.07 239.41 14.96 0.00 0.00 232267.66 20194.80 260978.92
00:27:06.781 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme9n1 : 1.07 185.67 11.60 0.00 0.00 287630.16 5364.24 271853.04
00:27:06.781 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:06.781 Verification LBA range: start 0x0 length 0x400
00:27:06.781 Nvme10n1 : 1.10 233.09 14.57 0.00 0.00 230526.10 22427.88 281173.71
00:27:06.781 ===================================================================================================================
00:27:06.781 Total : 2322.76 145.17 0.00 0.00 251117.40 1844.72 281173.71
00:27:07.039 07:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1598931
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:07.975 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:07.975 rmmod nvme_tcp
00:27:07.975 rmmod nvme_fabrics
00:27:08.236 rmmod nvme_keyring
00:27:08.236 07:14:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1598931 ']' 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1598931 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1598931 ']' 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1598931 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1598931 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1598931' 00:27:08.236 killing process with pid 1598931 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1598931 00:27:08.236 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1598931 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.800 07:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.707 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.707 00:27:10.707 real 0m7.803s 00:27:10.707 user 0m23.561s 00:27:10.707 sys 0m1.673s 00:27:10.707 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.707 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.707 ************************************ 00:27:10.707 END TEST nvmf_shutdown_tc2 00:27:10.707 ************************************ 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:10.708 07:14:40 
nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:10.708 ************************************ 00:27:10.708 START TEST nvmf_shutdown_tc3 00:27:10.708 ************************************ 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.708 07:14:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.708 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.708 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:10.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
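(Annotation for readers following the trace: the block above is nvmf/common.sh's gather_supported_nvmf_pci_devs at work. It seeds allowlists of supported device IDs, Intel E810 0x1592/0x159b and X722 0x37d2 plus a list of Mellanox parts, then resolves each surviving PCI function to its kernel net device through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 become cvl_0_0 and cvl_0_1 in this run. A minimal standalone sketch of that sysfs resolution follows; the PCI address is taken from this log, and the nullglob guard is a simplification of the helper's own checks.)

#!/usr/bin/env bash
# Sketch: resolve a PCI function to its bound net device, as the trace above does.
shopt -s nullglob                                  # an unmatched glob expands to nothing
pci=0000:0a:00.0                                   # address taken from this log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per bound netdev
if ((${#pci_net_devs[@]} == 0)); then
    echo "no net device bound under $pci" >&2
    exit 1
fi
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"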
00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.708 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:27:10.969 00:27:10.969 --- 10.0.0.2 ping statistics --- 00:27:10.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.969 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:27:10.969 00:27:10.969 --- 10.0.0.1 ping statistics --- 00:27:10.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.969 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1599960 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1599960 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1599960 ']' 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.969 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:10.969 [2024-07-13 07:14:40.319612] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:27:10.969 [2024-07-13 07:14:40.319722] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.969 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.969 [2024-07-13 07:14:40.361623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:10.969 [2024-07-13 07:14:40.390099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:11.229 [2024-07-13 07:14:40.483586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.229 [2024-07-13 07:14:40.483662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.229 [2024-07-13 07:14:40.483676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.229 [2024-07-13 07:14:40.483702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.229 [2024-07-13 07:14:40.483713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:11.229 [2024-07-13 07:14:40.483800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.229 [2024-07-13 07:14:40.483875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.229 [2024-07-13 07:14:40.483927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:11.229 [2024-07-13 07:14:40.483929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:11.229 [2024-07-13 07:14:40.636809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:11.229 07:14:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.229 07:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:11.488 Malloc1 00:27:11.488 [2024-07-13 07:14:40.719994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.488 Malloc2 00:27:11.488 Malloc3 00:27:11.488 Malloc4 00:27:11.488 Malloc5 00:27:11.488 Malloc6 00:27:11.746 Malloc7 00:27:11.746 Malloc8 00:27:11.746 Malloc9 00:27:11.746 Malloc10 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:11.746 07:14:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1600122 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1600122 /var/tmp/bdevperf.sock 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1600122 ']' 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:11.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.746 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.746 { 00:27:11.746 "params": { 00:27:11.746 "name": "Nvme$subsystem", 00:27:11.746 "trtype": "$TEST_TRANSPORT", 00:27:11.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.746 "adrfam": "ipv4", 00:27:11.747 "trsvcid": "$NVMF_PORT", 00:27:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.747 "hdgst": ${hdgst:-false}, 00:27:11.747 "ddgst": ${ddgst:-false} 00:27:11.747 }, 00:27:11.747 "method": "bdev_nvme_attach_controller" 00:27:11.747 } 00:27:11.747 EOF 00:27:11.747 )") 00:27:11.747 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:11.747 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.747 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.747 { 00:27:11.747 "params": { 00:27:11.747 "name": "Nvme$subsystem", 00:27:11.747 "trtype": "$TEST_TRANSPORT", 00:27:11.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.747 "adrfam": "ipv4", 00:27:11.747 "trsvcid": "$NVMF_PORT", 00:27:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.747 "hdgst": ${hdgst:-false}, 00:27:11.747 "ddgst": ${ddgst:-false} 00:27:11.747 }, 00:27:11.747 "method": "bdev_nvme_attach_controller" 00:27:11.747 } 00:27:11.747 EOF 00:27:11.747 )") 00:27:11.747 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:11.747 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.747 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.747 { 00:27:11.747 "params": { 00:27:11.747 "name": "Nvme$subsystem", 00:27:11.747 "trtype": "$TEST_TRANSPORT", 00:27:11.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.747 "adrfam": "ipv4", 00:27:11.747 "trsvcid": "$NVMF_PORT", 00:27:11.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.747 "hdgst": ${hdgst:-false}, 00:27:11.747 "ddgst": ${ddgst:-false} 00:27:11.747 }, 00:27:11.747 "method": "bdev_nvme_attach_controller" 00:27:11.747 } 00:27:11.747 EOF 00:27:11.747 )") 00:27:11.747 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.006 { 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme$subsystem", 00:27:12.006 "trtype": "$TEST_TRANSPORT", 00:27:12.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "$NVMF_PORT", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.006 "hdgst": ${hdgst:-false}, 00:27:12.006 "ddgst": ${ddgst:-false} 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 } 00:27:12.006 EOF 00:27:12.006 )") 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.006 { 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme$subsystem", 00:27:12.006 "trtype": "$TEST_TRANSPORT", 00:27:12.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "$NVMF_PORT", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.006 "hdgst": ${hdgst:-false}, 00:27:12.006 "ddgst": ${ddgst:-false} 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 } 00:27:12.006 EOF 00:27:12.006 )") 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.006 { 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme$subsystem", 00:27:12.006 "trtype": "$TEST_TRANSPORT", 00:27:12.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "$NVMF_PORT", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.006 "hdgst": ${hdgst:-false}, 00:27:12.006 "ddgst": ${ddgst:-false} 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 } 00:27:12.006 EOF 00:27:12.006 )") 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.006 { 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme$subsystem", 00:27:12.006 "trtype": "$TEST_TRANSPORT", 00:27:12.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "$NVMF_PORT", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.006 "hdgst": ${hdgst:-false}, 00:27:12.006 "ddgst": ${ddgst:-false} 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 } 00:27:12.006 EOF 00:27:12.006 )") 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.006 { 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme$subsystem", 00:27:12.006 "trtype": "$TEST_TRANSPORT", 00:27:12.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "$NVMF_PORT", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.006 "hdgst": ${hdgst:-false}, 00:27:12.006 "ddgst": ${ddgst:-false} 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 } 00:27:12.006 EOF 00:27:12.006 )") 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.006 { 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme$subsystem", 00:27:12.006 "trtype": "$TEST_TRANSPORT", 00:27:12.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "$NVMF_PORT", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.006 "hdgst": ${hdgst:-false}, 00:27:12.006 "ddgst": ${ddgst:-false} 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 } 00:27:12.006 EOF 00:27:12.006 )") 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.006 { 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme$subsystem", 00:27:12.006 "trtype": "$TEST_TRANSPORT", 00:27:12.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "$NVMF_PORT", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.006 "hdgst": ${hdgst:-false}, 00:27:12.006 "ddgst": ${ddgst:-false} 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 } 00:27:12.006 EOF 00:27:12.006 )") 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
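(Annotation: the ten config+= entries traced above come from the suite's gen_nvmf_target_json helper. Each pass of the for-subsystem loop captures one heredoc fragment into the config array, and the IFS=, and printf '%s\n' steps that follow join the array into the comma-separated object list that is run through jq and handed to bdevperf via --json /dev/fd/63. A reduced sketch of that assembly, two subsystems instead of ten with the parameter values from this run; note the real helper embeds the joined fragments in a fuller JSON document before jq sees them.)

#!/usr/bin/env bash
# Sketch of the fragment assembly traced above (gen_nvmf_target_json, reduced).
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=","
printf '%s\n' "${config[*]}"   # emits the comma-joined fragments, as in the printf output below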
00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:12.006 07:14:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme1", 00:27:12.006 "trtype": "tcp", 00:27:12.006 "traddr": "10.0.0.2", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "4420", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:12.006 "hdgst": false, 00:27:12.006 "ddgst": false 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 },{ 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme2", 00:27:12.006 "trtype": "tcp", 00:27:12.006 "traddr": "10.0.0.2", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "4420", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:12.006 "hdgst": false, 00:27:12.006 "ddgst": false 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 },{ 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme3", 00:27:12.006 "trtype": "tcp", 00:27:12.006 "traddr": "10.0.0.2", 00:27:12.006 "adrfam": "ipv4", 00:27:12.006 "trsvcid": "4420", 00:27:12.006 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:12.006 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:12.006 "hdgst": false, 00:27:12.006 "ddgst": false 00:27:12.006 }, 00:27:12.006 "method": "bdev_nvme_attach_controller" 00:27:12.006 },{ 00:27:12.006 "params": { 00:27:12.006 "name": "Nvme4", 00:27:12.006 "trtype": "tcp", 00:27:12.007 "traddr": "10.0.0.2", 00:27:12.007 "adrfam": "ipv4", 00:27:12.007 "trsvcid": "4420", 00:27:12.007 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:12.007 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:12.007 "hdgst": false, 00:27:12.007 "ddgst": false 00:27:12.007 }, 00:27:12.007 "method": "bdev_nvme_attach_controller" 00:27:12.007 },{ 00:27:12.007 "params": { 00:27:12.007 "name": "Nvme5", 00:27:12.007 "trtype": "tcp", 00:27:12.007 "traddr": "10.0.0.2", 00:27:12.007 "adrfam": "ipv4", 00:27:12.007 "trsvcid": "4420", 00:27:12.007 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:12.007 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:12.007 "hdgst": false, 00:27:12.007 "ddgst": false 00:27:12.007 }, 00:27:12.007 "method": "bdev_nvme_attach_controller" 00:27:12.007 },{ 00:27:12.007 "params": { 00:27:12.007 "name": "Nvme6", 00:27:12.007 "trtype": "tcp", 00:27:12.007 "traddr": "10.0.0.2", 00:27:12.007 "adrfam": "ipv4", 00:27:12.007 "trsvcid": "4420", 00:27:12.007 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:12.007 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:12.007 "hdgst": false, 00:27:12.007 "ddgst": false 00:27:12.007 }, 00:27:12.007 "method": "bdev_nvme_attach_controller" 00:27:12.007 },{ 00:27:12.007 "params": { 00:27:12.007 "name": "Nvme7", 00:27:12.007 "trtype": "tcp", 00:27:12.007 "traddr": "10.0.0.2", 00:27:12.007 "adrfam": "ipv4", 00:27:12.007 "trsvcid": "4420", 00:27:12.007 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:12.007 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:12.007 "hdgst": false, 00:27:12.007 "ddgst": false 00:27:12.007 }, 00:27:12.007 "method": "bdev_nvme_attach_controller" 00:27:12.007 },{ 00:27:12.007 "params": { 00:27:12.007 "name": "Nvme8", 00:27:12.007 "trtype": "tcp", 00:27:12.007 "traddr": "10.0.0.2", 00:27:12.007 "adrfam": "ipv4", 00:27:12.007 "trsvcid": "4420", 00:27:12.007 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:12.007 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:12.007 "hdgst": false, 
00:27:12.007 "ddgst": false 00:27:12.007 }, 00:27:12.007 "method": "bdev_nvme_attach_controller" 00:27:12.007 },{ 00:27:12.007 "params": { 00:27:12.007 "name": "Nvme9", 00:27:12.007 "trtype": "tcp", 00:27:12.007 "traddr": "10.0.0.2", 00:27:12.007 "adrfam": "ipv4", 00:27:12.007 "trsvcid": "4420", 00:27:12.007 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:12.007 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:12.007 "hdgst": false, 00:27:12.007 "ddgst": false 00:27:12.007 }, 00:27:12.007 "method": "bdev_nvme_attach_controller" 00:27:12.007 },{ 00:27:12.007 "params": { 00:27:12.007 "name": "Nvme10", 00:27:12.007 "trtype": "tcp", 00:27:12.007 "traddr": "10.0.0.2", 00:27:12.007 "adrfam": "ipv4", 00:27:12.007 "trsvcid": "4420", 00:27:12.007 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:12.007 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:12.007 "hdgst": false, 00:27:12.007 "ddgst": false 00:27:12.007 }, 00:27:12.007 "method": "bdev_nvme_attach_controller" 00:27:12.007 }' 00:27:12.007 [2024-07-13 07:14:41.235805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:12.007 [2024-07-13 07:14:41.235914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600122 ] 00:27:12.007 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.007 [2024-07-13 07:14:41.271032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:12.007 [2024-07-13 07:14:41.300117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.007 [2024-07-13 07:14:41.386776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.913 Running I/O for 10 seconds... 
00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:13.913 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:14.172 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:14.430 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1599960 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1599960 ']' 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1599960 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:14.431 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1599960 00:27:14.705 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:14.705 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:14.705 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1599960' 00:27:14.705 killing process with pid 1599960 00:27:14.705 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1599960 00:27:14.705 07:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1599960 00:27:14.705 [2024-07-13 07:14:43.898442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec44a0 is same with the state(5) to be set 00:27:14.705 [2024-07-13 07:14:43.898520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec44a0 is same with the state(5) to be set 00:27:14.705 [2024-07-13 07:14:43.898546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec44a0 is same with the state(5) to be set 00:27:14.705 [2024-07-13 07:14:43.898559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1ec44a0 is same with the state(5) to be set 00:27:14.705 [2024-07-13 07:14:43.898582 .. 07:14:43.899360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec44a0 is same with the state(5) to be set (identical entry repeated at each intervening offset) 00:27:14.706 [2024-07-13 07:14:43.900782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900821 .. 07:14:43.900894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set (identical entry repeated at each intervening offset) 00:27:14.706 [2024-07-13 07:14:43.900909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.900999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 
07:14:43.901225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same 
with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.901668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1f40 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904260] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.706 [2024-07-13 07:14:43.904286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the 
state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904925] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.904990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.905003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.905016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.905028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec28a0 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 
07:14:43.906161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same 
with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.707 [2024-07-13 07:14:43.906639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906727] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.906894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2d40 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the 
state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 
07:14:43.908809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.908941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3200 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.708 [2024-07-13 07:14:43.910420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same 
with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set 00:27:14.709 [2024-07-13 07:14:43.910745] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec36c0 is same with the state(5) to be set
[... identical recv-state errors for tqpair=0x1ec36c0 (07:14:43.910757 through 07:14:43.911140) elided ...]
00:27:14.710 [2024-07-13 07:14:43.912176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3b60 is same with the state(5) to be set
[... identical recv-state errors for tqpair=0x1ec3b60 (through 07:14:43.913026) elided ...]
00:27:14.710 [2024-07-13 07:14:43.913753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4000 is same with the state(5) to be set
[... identical recv-state errors for tqpair=0x1ec4000 (through 07:14:43.914610) elided ...]
00:27:14.711 [2024-07-13 07:14:43.916710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:14.711 [2024-07-13 07:14:43.916751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3 elided ...]
00:27:14.711 [2024-07-13 07:14:43.916884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2613600 is same with the state(5) to be set
[... the same block of four aborted ASYNC EVENT REQUESTs followed by a recv-state error repeated for tqpair=0x2614e80, 0x2614320, 0x1f3f610, 0x2486740, 0x248c010, 0x2615a90, 0x2449f10, 0x261d140 and 0x2492b40 elided ...]
00:27:14.712 [2024-07-13 07:14:43.918524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.712 [2024-07-13 07:14:43.918549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeated for cid:1 through cid:63 (lba:24704 through lba:32640, len:128) elided ...]
00:27:14.713 [2024-07-13 07:14:43.920633] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2503940 was disconnected and freed. reset controller.
00:27:14.713 [2024-07-13 07:14:43.920798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.713 [2024-07-13 07:14:43.920828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeated for cid:1 through cid:29 (lba:24704 through lba:28288, len:128) elided ...]
00:27:14.714 [2024-07-13
07:14:43.921800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.921814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.921830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.921845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.921871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.921887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.921904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.921919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.921935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.921950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.921967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.921981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 
07:14:43.922127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 
07:14:43.922443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.714 [2024-07-13 07:14:43.922726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.714 [2024-07-13 07:14:43.922742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.715 [2024-07-13 
07:14:43.922758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.715 [2024-07-13 07:14:43.922773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.715 [2024-07-13 07:14:43.922793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.715 [2024-07-13 07:14:43.922809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.715 [2024-07-13 07:14:43.922826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.715 [2024-07-13 07:14:43.922841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.715 [2024-07-13 07:14:43.922870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.715 [2024-07-13 07:14:43.922886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.715 [2024-07-13 07:14:43.922902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25837c0 is same with the state(5) to be set 00:27:14.715 [2024-07-13 07:14:43.922975] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25837c0 was disconnected and freed. reset controller. 00:27:14.715 [2024-07-13 07:14:43.926226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.715 [2024-07-13 07:14:43.926272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:14.715 [2024-07-13 07:14:43.926303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x261d140 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.926327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2449f10 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.926872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2613600 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.926915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2614e80 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.926951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2614320 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.926983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3f610 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.927014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2486740 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.927046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c010 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.927077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2615a90 (9): Bad file descriptor 00:27:14.715 [2024-07-13 07:14:43.927110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
00:27:14.715 [2024-07-13 07:14:43.927789] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:14.715 [2024-07-13 07:14:43.927896] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:14.715 [2024-07-13 07:14:43.927970] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:14.715 [2024-07-13 07:14:43.928044] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:14.715 [2024-07-13 07:14:43.928120] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:14.715 [2024-07-13 07:14:43.928203] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:14.715 [2024-07-13 07:14:43.928285] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:14.715 [2024-07-13 07:14:43.928491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.715 [2024-07-13 07:14:43.928527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2449f10 with addr=10.0.0.2, port=4420
00:27:14.715 [2024-07-13 07:14:43.928546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449f10 is same with the state(5) to be set
00:27:14.715 [2024-07-13 07:14:43.928687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.715 [2024-07-13 07:14:43.928714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x261d140 with addr=10.0.0.2, port=4420
00:27:14.715 [2024-07-13 07:14:43.928730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261d140 is same with the state(5) to be set
00:27:14.715 [2024-07-13 07:14:43.928906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2449f10 (9): Bad file descriptor
00:27:14.715 [2024-07-13 07:14:43.928936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x261d140 (9): Bad file descriptor
00:27:14.715 [2024-07-13 07:14:43.929010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.715 [2024-07-13 07:14:43.929035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:14.715 [2024-07-13 07:14:43.929062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.715 [2024-07-13 07:14:43.929079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:14.715 [2024-07-13 07:14:43.929097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.715 [2024-07-13 07:14:43.929113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:14.715 [2024-07-13 07:14:43.929129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.715 [2024-07-13 07:14:43.929144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:14.715 [2024-07-13 07:14:43.929160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.715 [2024-07-13 07:14:43.929175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:5 through cid:62, lba 25216 through 32512 in steps of 128 ...]
00:27:14.716 [2024-07-13 07:14:43.931093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.716 [2024-07-13 07:14:43.931107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:14.716 [2024-07-13 07:14:43.931123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2582350 is same with the state(5) to be set
00:27:14.716 [2024-07-13 07:14:43.931213] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2582350 was disconnected and freed. reset controller.
00:27:14.716 [2024-07-13 07:14:43.931291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:14.716 [2024-07-13 07:14:43.931312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:14.716 [2024-07-13 07:14:43.931328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:14.716 [2024-07-13 07:14:43.931350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:14.716 [2024-07-13 07:14:43.931365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:14.716 [2024-07-13 07:14:43.931378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:14.716 [2024-07-13 07:14:43.932579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.716 [2024-07-13 07:14:43.932603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.716 [2024-07-13 07:14:43.932617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:14.716 [2024-07-13 07:14:43.932834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.716 [2024-07-13 07:14:43.932880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2615a90 with addr=10.0.0.2, port=4420
00:27:14.716 [2024-07-13 07:14:43.932900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2615a90 is same with the state(5) to be set
00:27:14.716 [2024-07-13 07:14:43.933224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2615a90 (9): Bad file descriptor
00:27:14.716 [2024-07-13 07:14:43.933296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:14.716 [2024-07-13 07:14:43.933317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:14.716 [2024-07-13 07:14:43.933332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:14.716 [2024-07-13 07:14:43.933398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.716 [2024-07-13 07:14:43.937071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.716 [2024-07-13 07:14:43.937101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:48, lba 24704 through 30720 in steps of 128 ...]
00:27:14.718 [2024-07-13 07:14:43.938677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.718 [2024-07-13 07:14:43.938691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:14.718 [2024-07-13 07:14:43.938707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.938972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.938987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.939002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.939017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.939033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.939048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.939064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.939078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.939095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.939110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.939126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.939144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.939161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2444640 is same with the state(5) to be set 00:27:14.718 [2024-07-13 07:14:43.940427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.940973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.940989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.718 [2024-07-13 07:14:43.941246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.718 [2024-07-13 07:14:43.941261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:14.719 [2024-07-13 07:14:43.941915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.941978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.941992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 
07:14:43.942233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.942473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.942488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445ab0 is same with the state(5) to be set 00:27:14.719 [2024-07-13 07:14:43.943722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.943745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.943766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.943782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.943799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.943814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.943831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.943845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.943862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.943884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.943901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.719 [2024-07-13 07:14:43.943916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.719 [2024-07-13 07:14:43.943933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.943947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.943964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.943979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.943995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.944976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.944992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.720 [2024-07-13 07:14:43.945369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.720 [2024-07-13 07:14:43.945384] nvme_qpair.c: 
00:27:14.720-00:27:14.725 [2024-07-13 07:14:43.945399 - 07:14:43.955522] -- repeated NOTICE pair from nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion, folded --
  *NOTICE*: READ sqid:1 cid:<n> nsid:1 lba:<16384 + 128*n> len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
  *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
-- the pair enumerates cid 0 through 63 (lba 16384 through 24448); four such sequences appear in this span: the tail of the first (cid 52-63), two complete runs of cid 0-63, and a fourth (starting 07:14:43.953643) that breaks off mid-entry at cid 58; the first three each end with one of the ERROR lines below --
00:27:14.721 [2024-07-13 07:14:43.945772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2584380 is same with the state(5) to be set
00:27:14.722 [2024-07-13 07:14:43.949095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25857d0 is same with the state(5) to be set
00:27:14.724 [2024-07-13 07:14:43.952399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2586c80 is same with the state(5) to be set
07:14:43.955538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.955552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.955568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.955583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.955599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.955614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.955630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.955644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.955661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.955675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.955690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2587f80 is same with the state(5) to be set 00:27:14.725 [2024-07-13 07:14:43.958022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.725 [2024-07-13 07:14:43.958343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.725 [2024-07-13 07:14:43.958358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.958970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.958987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.726 [2024-07-13 07:14:43.959834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.726 [2024-07-13 07:14:43.959849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.959871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.727 [2024-07-13 07:14:43.959887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.959903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.727 [2024-07-13 07:14:43.959918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.959934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.727 [2024-07-13 07:14:43.959948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.959964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.727 [2024-07-13 07:14:43.959979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.959995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.727 [2024-07-13 07:14:43.960009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.960025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.727 [2024-07-13 07:14:43.960040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.960057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.727 [2024-07-13 07:14:43.960071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.727 [2024-07-13 07:14:43.960086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2589420 is same with the state(5) to be set 00:27:14.727 [2024-07-13 07:14:43.962070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
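These two bursts are the expected teardown signature: deleting a submission queue aborts every READ still queued on it, one SQ DELETION (00/08) completion per outstanding command (cid 29-63 on the first qpair above, cid 0-63 on the second). A quick way to tally them offline, assuming the console output was saved to a file (the path here is hypothetical):
  # count aborted completions across the whole run
  grep -c 'ABORTED - SQ DELETION' /tmp/nvmf_shutdown_tc3.log
  # confirm the aborted READs walk the LBA range 128 blocks at a time
  grep -o 'lba:[0-9]*' /tmp/nvmf_shutdown_tc3.log | sort -t: -k2,2n -u | head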
00:27:14.727 [2024-07-13 07:14:43.962070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:14.727 [2024-07-13 07:14:43.962113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:14.727 [2024-07-13 07:14:43.962137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:14.727 [2024-07-13 07:14:43.962156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:14.727 [2024-07-13 07:14:43.962263] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.962289] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.962309] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.962333] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.962356] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.962455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:14.727 [2024-07-13 07:14:43.962481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:14.727 [2024-07-13 07:14:43.962500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:14.727 task offset: 24576 on job bdev=Nvme1n1 fails
00:27:14.727
00:27:14.727 Latency(us)
00:27:14.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:14.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme1n1 ended in about 0.92 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme1n1 : 0.92 209.39 13.09 69.80 0.00 226692.93 19418.07 250104.79
00:27:14.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme2n1 ended in about 0.92 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme2n1 : 0.92 207.60 12.98 69.20 0.00 224054.99 19126.80 254765.13
00:27:14.727 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme3n1 ended in about 0.92 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme3n1 : 0.92 209.12 13.07 69.71 0.00 217811.91 8835.22 279620.27
00:27:14.727 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme4n1 ended in about 0.93 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme4n1 : 0.93 205.87 12.87 68.62 0.00 216963.41 18252.99 237677.23
00:27:14.727 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme5n1 ended in about 0.94 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme5n1 : 0.94 136.77 8.55 68.38 0.00 284563.53 22622.06 276513.37
00:27:14.727 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme6n1 ended in about 0.94 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme6n1 : 0.94 136.29 8.52 68.14 0.00 279741.63 19903.53 257872.02
00:27:14.727 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme7n1 ended in about 0.94 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme7n1 : 0.94 135.81 8.49 67.90 0.00 274932.37 20874.43 281173.71
00:27:14.727 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme8n1 ended in about 0.95 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme8n1 : 0.95 135.33 8.46 67.67 0.00 270104.40 20291.89 236123.78
00:27:14.727 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme9n1 ended in about 0.95 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme9n1 : 0.95 134.87 8.43 67.43 0.00 265436.22 20291.89 295154.73
00:27:14.727 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:14.727 Job: Nvme10n1 ended in about 0.95 seconds with error
00:27:14.727 Verification LBA range: start 0x0 length 0x400
00:27:14.727 Nvme10n1 : 0.95 134.25 8.39 67.12 0.00 261242.63 19126.80 264085.81
00:27:14.727 ===================================================================================================================
00:27:14.727 Total : 1645.29 102.83 683.98 0.00 248533.98 8835.22 295154.73
00:27:14.727 [2024-07-13 07:14:43.989547] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:14.727 [2024-07-13 07:14:43.989637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:14.727 [2024-07-13 07:14:43.989958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.989994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x261d140 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.990015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261d140 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.990149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.990176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2486740 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.990193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2486740 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.990368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.990395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c010 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.990411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c010 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.990552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.990578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3f610 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.990595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3f610 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.992419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
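A consistency check on the table above: the IO size is 65536 bytes (64 KiB), so MiB/s = IOPS x 64 / 1024; for Nvme1n1 that is 209.39 x 64 / 1024 = 13.09 MiB/s, matching its row, and the Total line is the per-column sum of the ten devices (209.39 + 207.60 + ... + 134.25 = 1645.30 IOPS, within rounding of the reported 1645.29). A sketch for pulling the device and IOPS columns out of a saved copy of this report (the file name is hypothetical):
  awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" {print $2, $5}' nvmf_shutdown_tc3.log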
00:27:14.727 [2024-07-13 07:14:43.992619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.992649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2492b40 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.992667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b40 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.992785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.992811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2614320 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.992828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2614320 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.992948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.992975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2613600 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.992992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2613600 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.993119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.993145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2614e80 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.993172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2614e80 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.993199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x261d140 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2486740 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c010 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3f610 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993300] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.993331] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.993353] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.993373] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:14.727 [2024-07-13 07:14:43.993392] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
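errno 111 here is ECONNREFUSED: the bdev layer keeps trying to reconnect each qpair to 10.0.0.2:4420, but the target side was torn down first, so every connect() is refused and the resets below can only fail. To confirm the errno mapping on the test host (assuming python3 is available there):
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused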
00:27:14.727 [2024-07-13 07:14:43.993473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:14.727 [2024-07-13 07:14:43.993707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-07-13 07:14:43.993736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2449f10 with addr=10.0.0.2, port=4420
00:27:14.727 [2024-07-13 07:14:43.993753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449f10 is same with the state(5) to be set
00:27:14.727 [2024-07-13 07:14:43.993772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2492b40 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2614320 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2613600 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2614e80 (9): Bad file descriptor
00:27:14.727 [2024-07-13 07:14:43.993845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:14.727 [2024-07-13 07:14:43.993859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:14.727 [2024-07-13 07:14:43.993884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:14.728 [2024-07-13 07:14:43.993905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:14.728 [2024-07-13 07:14:43.993920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:14.728 [2024-07-13 07:14:43.993933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:14.728 [2024-07-13 07:14:43.993950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:14.728 [2024-07-13 07:14:43.993964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:14.728 [2024-07-13 07:14:43.993977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:14.728 [2024-07-13 07:14:43.993996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:14.728 [2024-07-13 07:14:43.994010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:14.728 [2024-07-13 07:14:43.994029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:14.728 [2024-07-13 07:14:43.994130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.728 [2024-07-13 07:14:43.994151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.728 [2024-07-13 07:14:43.994164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
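Each subsystem walks the same failure path: nvme_ctrlr_process_init finds the controller in error state, reconnect_poll_async gives up, and nvme_ctrlr_fail parks it in failed state, after which the reset is reported as failed. On a live target this is the point where one would inspect controller state over RPC; a hedged sketch, assuming scripts/rpc.py from the checked-out SPDK tree and the default application socket (illustrative only, since the app here is already shutting down):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers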
00:27:14.728 [2024-07-13 07:14:43.994176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.728 [2024-07-13 07:14:43.994307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.728 [2024-07-13 07:14:43.994334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2615a90 with addr=10.0.0.2, port=4420
00:27:14.728 [2024-07-13 07:14:43.994350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2615a90 is same with the state(5) to be set
00:27:14.728 [2024-07-13 07:14:43.994368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2449f10 (9): Bad file descriptor
00:27:14.728 [2024-07-13 07:14:43.994386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:27:14.728 [2024-07-13 07:14:43.994400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:27:14.728 [2024-07-13 07:14:43.994413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:14.728 [2024-07-13 07:14:43.994431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:14.728 [2024-07-13 07:14:43.994445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:14.728 [2024-07-13 07:14:43.994459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:14.728 [2024-07-13 07:14:43.994475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:27:14.728 [2024-07-13 07:14:43.994489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:27:14.728 [2024-07-13 07:14:43.994502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:27:14.728 [2024-07-13 07:14:43.994518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:14.728 [2024-07-13 07:14:43.994532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:14.728 [2024-07-13 07:14:43.994545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:14.728 [2024-07-13 07:14:43.994583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.728 [2024-07-13 07:14:43.994602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.728 [2024-07-13 07:14:43.994614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.728 [2024-07-13 07:14:43.994627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:14.728 [2024-07-13 07:14:43.994642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2615a90 (9): Bad file descriptor 00:27:14.728 [2024-07-13 07:14:43.994659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.728 [2024-07-13 07:14:43.994673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.728 [2024-07-13 07:14:43.994686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.728 [2024-07-13 07:14:43.994727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.728 [2024-07-13 07:14:43.994747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:14.728 [2024-07-13 07:14:43.994766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:14.728 [2024-07-13 07:14:43.994780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:14.728 [2024-07-13 07:14:43.994815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.986 07:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:14.986 07:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1600122 00:27:16.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1600122) - No such process 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:16.369 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.370 rmmod nvme_tcp 00:27:16.370 rmmod nvme_fabrics 00:27:16.370 rmmod nvme_keyring 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:16.370 07:14:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.370 07:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.281 07:14:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:18.281 00:27:18.281 real 0m7.441s 00:27:18.281 user 0m17.883s 00:27:18.281 sys 0m1.557s 00:27:18.281 07:14:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:18.282 07:14:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.282 ************************************ 00:27:18.282 END TEST nvmf_shutdown_tc3 00:27:18.282 ************************************ 00:27:18.282 07:14:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:18.282 07:14:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:18.282 00:27:18.282 real 0m27.330s 00:27:18.282 user 1m15.889s 00:27:18.282 sys 0m6.615s 00:27:18.282 07:14:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:18.282 07:14:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:18.282 ************************************ 00:27:18.282 END TEST nvmf_shutdown 00:27:18.282 ************************************ 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:18.282 07:14:47 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.282 07:14:47 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.282 07:14:47 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:18.282 07:14:47 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.282 07:14:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.282 ************************************ 00:27:18.282 START TEST nvmf_multicontroller 00:27:18.282 ************************************ 00:27:18.282 07:14:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:18.282 * Looking for test storage... 00:27:18.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:18.282 07:14:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:18.282 07:14:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:20.186 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.187 07:14:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:20.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:20.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:20.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:20.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.187 07:14:49 
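gather_supported_nvmf_pci_devs, traced above, classifies NICs purely by PCI vendor:device pairs (E810/X722 ids for Intel, a list of ConnectX ids for Mellanox) and then resolves each matching function to its kernel netdev through sysfs. A simplified standalone sketch of just the E810 branch that produced the two "Found" lines (the real function also walks the other id lists and applies RDMA-specific filtering):

  for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
    for net in "$pci"/net/*; do                 # same sysfs glob the script itself uses
      echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done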
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.187 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:27:20.446 00:27:20.446 --- 10.0.0.2 ping statistics --- 00:27:20.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.446 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:20.446 00:27:20.446 --- 10.0.0.1 ping statistics --- 00:27:20.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.446 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1602631 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1602631 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1602631 ']' 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.446 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.446 [2024-07-13 07:14:49.731002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:20.446 [2024-07-13 07:14:49.731089] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.446 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.446 [2024-07-13 07:14:49.767580] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:20.446 [2024-07-13 07:14:49.794714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:20.446 [2024-07-13 07:14:49.878835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.446 [2024-07-13 07:14:49.878897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.446 [2024-07-13 07:14:49.878922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.446 [2024-07-13 07:14:49.878933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.446 [2024-07-13 07:14:49.878943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
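Everything from prepare_net_devs through nvmfappstart reduces to a short recipe: isolate the target-side port of the two-port E810 in its own network namespace, address both ends of the link, open TCP/4420 through the firewall, prove connectivity with ping in both directions, then launch nvmf_tgt inside the namespace. The commands as logged, condensed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> namespace: 0.190 ms
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns: 0.134 ms
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE

-m 0xE puts the target on cores 1-3 (hence the three reactor notices that follow), and -e 0xFFFF enables all tracepoint groups, which is why app_setup_trace suggests spdk_trace snapshots.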
00:27:20.446 [2024-07-13 07:14:49.879032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:20.446 [2024-07-13 07:14:49.879107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:20.446 [2024-07-13 07:14:49.879110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.705 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:20.705 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:20.705 07:14:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.705 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:20.705 07:14:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 [2024-07-13 07:14:50.010917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 Malloc0 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 [2024-07-13 07:14:50.066100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.705 
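With the target listening on /var/tmp/spdk.sock, the fixture is provisioned over JSON-RPC; rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py. The equivalent standalone sequence for cnode1 (cnode2 on the next lines follows the same recipe, and each subsystem is given listeners on both 4420 and 4421):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                   # -u: io_unit_size 8 KiB
  rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

-a marks the subsystem as allowing any host, so no explicit nvmf_subsystem_add_host calls are needed before the initiator connects.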
07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 [2024-07-13 07:14:50.074018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 Malloc1 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:20.705 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1602655 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1602655 /var/tmp/bdevperf.sock 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1602655 ']' 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:20.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.706 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.278 NVMe0n1 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.278 1 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.278 request: 00:27:21.278 { 00:27:21.278 "name": "NVMe0", 00:27:21.278 "trtype": "tcp", 00:27:21.278 "traddr": "10.0.0.2", 00:27:21.278 "adrfam": "ipv4", 00:27:21.278 "trsvcid": "4420", 00:27:21.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.278 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:21.278 "hostaddr": "10.0.0.2", 00:27:21.278 "hostsvcid": "60000", 00:27:21.278 "prchk_reftag": false, 00:27:21.278 "prchk_guard": false, 00:27:21.278 "hdgst": false, 00:27:21.278 "ddgst": false, 00:27:21.278 "method": "bdev_nvme_attach_controller", 00:27:21.278 "req_id": 1 00:27:21.278 } 00:27:21.278 Got JSON-RPC error response 00:27:21.278 response: 00:27:21.278 { 00:27:21.278 "code": -114, 00:27:21.278 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:21.278 } 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.278 request: 00:27:21.278 { 00:27:21.278 "name": "NVMe0", 00:27:21.278 "trtype": "tcp", 00:27:21.278 "traddr": "10.0.0.2", 00:27:21.278 "adrfam": "ipv4", 00:27:21.278 "trsvcid": "4420", 00:27:21.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:21.278 "hostaddr": "10.0.0.2", 00:27:21.278 "hostsvcid": "60000", 00:27:21.278 "prchk_reftag": false, 00:27:21.278 "prchk_guard": false, 00:27:21.278 
"hdgst": false, 00:27:21.278 "ddgst": false, 00:27:21.278 "method": "bdev_nvme_attach_controller", 00:27:21.278 "req_id": 1 00:27:21.278 } 00:27:21.278 Got JSON-RPC error response 00:27:21.278 response: 00:27:21.278 { 00:27:21.278 "code": -114, 00:27:21.278 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:21.278 } 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.278 request: 00:27:21.278 { 00:27:21.278 "name": "NVMe0", 00:27:21.278 "trtype": "tcp", 00:27:21.278 "traddr": "10.0.0.2", 00:27:21.278 "adrfam": "ipv4", 00:27:21.278 "trsvcid": "4420", 00:27:21.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.278 "hostaddr": "10.0.0.2", 00:27:21.278 "hostsvcid": "60000", 00:27:21.278 "prchk_reftag": false, 00:27:21.278 "prchk_guard": false, 00:27:21.278 "hdgst": false, 00:27:21.278 "ddgst": false, 00:27:21.278 "multipath": "disable", 00:27:21.278 "method": "bdev_nvme_attach_controller", 00:27:21.278 "req_id": 1 00:27:21.278 } 00:27:21.278 Got JSON-RPC error response 00:27:21.278 response: 00:27:21.278 { 00:27:21.278 "code": -114, 00:27:21.278 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:21.278 } 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.278 07:14:50 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:21.278 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.279 request: 00:27:21.279 { 00:27:21.279 "name": "NVMe0", 00:27:21.279 "trtype": "tcp", 00:27:21.279 "traddr": "10.0.0.2", 00:27:21.279 "adrfam": "ipv4", 00:27:21.279 "trsvcid": "4420", 00:27:21.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.279 "hostaddr": "10.0.0.2", 00:27:21.279 "hostsvcid": "60000", 00:27:21.279 "prchk_reftag": false, 00:27:21.279 "prchk_guard": false, 00:27:21.279 "hdgst": false, 00:27:21.279 "ddgst": false, 00:27:21.279 "multipath": "failover", 00:27:21.279 "method": "bdev_nvme_attach_controller", 00:27:21.279 "req_id": 1 00:27:21.279 } 00:27:21.279 Got JSON-RPC error response 00:27:21.279 response: 00:27:21.279 { 00:27:21.279 "code": -114, 00:27:21.279 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:21.279 } 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.279 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.537 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.537 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:21.537 07:14:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:22.913 0 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1602655 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1602655 ']' 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1602655 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602655 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602655' 00:27:22.914 killing process with pid 1602655 00:27:22.914 07:14:52 
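Once the retry targets a genuinely new path (the second listener on port 4421), the attach under the existing name is accepted as an additional path; the test then detaches it, rebinds port 4421 to a separate controller NVMe1, confirms two controllers, and drives the write workload through bdevperf's RPC helper. Condensed from the trace:

  sock=/var/tmp/bdevperf.sock
  rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1                        # second path under NVMe0
  rpc.py -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1                        # drop that path again
  rpc.py -s $sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000   # same port, new name
  rpc.py -s $sock bdev_nvme_get_controllers | grep -c NVMe         # expect 2
  bdevperf.py -s $sock perform_tests                               # runs the -q 128 -o 4096 -w write job

The try.txt dump quoted below is this bdevperf process's own log: the bdev UUID collision printed there is the expected side effect of attaching NVMe1, since both controllers expose the same namespace, and the latency table is the output of perform_tests.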
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1602655 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1602655 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:22.914 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:22.914 [2024-07-13 07:14:50.175262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:22.914 [2024-07-13 07:14:50.175360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602655 ] 00:27:22.914 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.914 [2024-07-13 07:14:50.209995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:22.914 [2024-07-13 07:14:50.239984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.914 [2024-07-13 07:14:50.328290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.914 [2024-07-13 07:14:50.914460] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e024122f-efe6-424c-8db4-e7af4370414e already exists 00:27:22.914 [2024-07-13 07:14:50.914504] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e024122f-efe6-424c-8db4-e7af4370414e alias for bdev NVMe1n1 00:27:22.914 [2024-07-13 07:14:50.914519] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:22.914 Running I/O for 1 seconds... 
00:27:22.914 00:27:22.914 Latency(us) 00:27:22.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.914 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:22.914 NVMe0n1 : 1.01 18514.15 72.32 0.00 0.00 6895.06 4126.34 16602.45 00:27:22.914 =================================================================================================================== 00:27:22.914 Total : 18514.15 72.32 0.00 0.00 6895.06 4126.34 16602.45 00:27:22.914 Received shutdown signal, test time was about 1.000000 seconds 00:27:22.914 00:27:22.914 Latency(us) 00:27:22.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.914 =================================================================================================================== 00:27:22.914 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:22.914 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.914 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.914 rmmod nvme_tcp 00:27:22.914 rmmod nvme_fabrics 00:27:22.914 rmmod nvme_keyring 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1602631 ']' 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1602631 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1602631 ']' 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1602631 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602631 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602631' 00:27:23.175 killing process with pid 1602631 00:27:23.175 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1602631 00:27:23.175 07:14:52 
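nvmftestfini then unwinds the environment in reverse order: sync, unload the kernel initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics, nvme_keyring), kill the target, and finally tear down the addressing, as the following lines show. In outline:

  modprobe -v -r nvme-tcp       # drags out nvme_fabrics and nvme_keyring as well
  kill $nvmfpid                 # the nvmf_tgt started earlier (pid 1602631 here), then wait
  ip -4 addr flush cvl_0_1      # drop the initiator-side test address
  _remove_spdk_ns               # assumption: deletes cvl_0_0_ns_spdk, returning cvl_0_0 to the root ns (its body is not shown in this trace)

Leaving the namespace teardown to a helper means the next suite can rebuild the identical topology from scratch, which the aer run below immediately does.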
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1602631 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.434 07:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.341 07:14:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:25.341 00:27:25.341 real 0m7.135s 00:27:25.341 user 0m11.142s 00:27:25.341 sys 0m2.221s 00:27:25.341 07:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.341 07:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.341 ************************************ 00:27:25.341 END TEST nvmf_multicontroller 00:27:25.341 ************************************ 00:27:25.341 07:14:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:25.341 07:14:54 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:25.341 07:14:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:25.341 07:14:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.341 07:14:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.601 ************************************ 00:27:25.601 START TEST nvmf_aer 00:27:25.601 ************************************ 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:25.601 * Looking for test storage... 
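run_test, visible here closing one suite and opening the next, is the harness primitive that gives every test its START/END banners, timing summary (the real/user/sys triple above), and xtrace bookkeeping. An illustrative reconstruction, assuming only the banner-and-time structure visible in the log (the real helper lives in autotest_common.sh and its exact body is not shown here):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                   # e.g. .../test/nvmf/host/aer.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }

Because time is a shell keyword rather than a command, $? after it still reflects the wrapped test's exit status.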
00:27:25.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.601 07:14:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.602 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:25.602 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:25.602 07:14:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:25.602 07:14:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:27.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:27:27.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:27.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:27.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.506 
07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:27:27.506 00:27:27.506 --- 10.0.0.2 ping statistics --- 00:27:27.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.506 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:27.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:27:27.506 00:27:27.506 --- 10.0.0.1 ping statistics --- 00:27:27.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.506 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:27.506 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1604857 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1604857 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1604857 ']' 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.507 07:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.507 [2024-07-13 07:14:56.931891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:27.507 [2024-07-13 07:14:56.931977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.765 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.765 [2024-07-13 07:14:56.970289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:27.765 [2024-07-13 07:14:57.002347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.765 [2024-07-13 07:14:57.094224] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
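While the target app comes up, it is worth decoding the network plumbing traced just above: nvmf_tcp_init moves one port of the NIC pair into its own network namespace so a single machine can act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace), with the two-way pings as the smoke test. A minimal by-hand sketch, assuming root privileges and a connected interface pair (cvl_0_0/cvl_0_1 are this rig's E810 ports; substitute your own):

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip netns add "$NS"                         # namespace that will hold the target-side port
ip link set "$TGT_IF" netns "$NS"          # move the target port out of the root ns
ip addr add 10.0.0.1/24 dev "$INI_IF"      # initiator address stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator-side port, as the trace does
ping -c 1 10.0.0.2                         # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns

Teardown is the _remove_spdk_ns / ip -4 addr flush pair that brackets every run in this log.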
00:27:27.765 [2024-07-13 07:14:57.094288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.765 [2024-07-13 07:14:57.094305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.765 [2024-07-13 07:14:57.094319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.765 [2024-07-13 07:14:57.094334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.765 [2024-07-13 07:14:57.094417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.765 [2024-07-13 07:14:57.094485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.765 [2024-07-13 07:14:57.094579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.765 [2024-07-13 07:14:57.094581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.024 [2024-07-13 07:14:57.251795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.024 Malloc0 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.024 07:14:57 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.024 [2024-07-13 07:14:57.305062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.024 [ 00:27:28.024 { 00:27:28.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:28.024 "subtype": "Discovery", 00:27:28.024 "listen_addresses": [], 00:27:28.024 "allow_any_host": true, 00:27:28.024 "hosts": [] 00:27:28.024 }, 00:27:28.024 { 00:27:28.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.024 "subtype": "NVMe", 00:27:28.024 "listen_addresses": [ 00:27:28.024 { 00:27:28.024 "trtype": "TCP", 00:27:28.024 "adrfam": "IPv4", 00:27:28.024 "traddr": "10.0.0.2", 00:27:28.024 "trsvcid": "4420" 00:27:28.024 } 00:27:28.024 ], 00:27:28.024 "allow_any_host": true, 00:27:28.024 "hosts": [], 00:27:28.024 "serial_number": "SPDK00000000000001", 00:27:28.024 "model_number": "SPDK bdev Controller", 00:27:28.024 "max_namespaces": 2, 00:27:28.024 "min_cntlid": 1, 00:27:28.024 "max_cntlid": 65519, 00:27:28.024 "namespaces": [ 00:27:28.024 { 00:27:28.024 "nsid": 1, 00:27:28.024 "bdev_name": "Malloc0", 00:27:28.024 "name": "Malloc0", 00:27:28.024 "nguid": "01EC0AE69CE24791831F4FBBF08639E1", 00:27:28.024 "uuid": "01ec0ae6-9ce2-4791-831f-4fbbf08639e1" 00:27:28.024 } 00:27:28.024 ] 00:27:28.024 } 00:27:28.024 ] 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1604891 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:28.024 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:28.024 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.283 Malloc1 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.283 Asynchronous Event Request test 00:27:28.283 Attaching to 10.0.0.2 00:27:28.283 Attached to 10.0.0.2 00:27:28.283 Registering asynchronous event callbacks... 00:27:28.283 Starting namespace attribute notice tests for all controllers... 00:27:28.283 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:28.283 aer_cb - Changed Namespace 00:27:28.283 Cleaning up... 00:27:28.283 [ 00:27:28.283 { 00:27:28.283 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:28.283 "subtype": "Discovery", 00:27:28.283 "listen_addresses": [], 00:27:28.283 "allow_any_host": true, 00:27:28.283 "hosts": [] 00:27:28.283 }, 00:27:28.283 { 00:27:28.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.283 "subtype": "NVMe", 00:27:28.283 "listen_addresses": [ 00:27:28.283 { 00:27:28.283 "trtype": "TCP", 00:27:28.283 "adrfam": "IPv4", 00:27:28.283 "traddr": "10.0.0.2", 00:27:28.283 "trsvcid": "4420" 00:27:28.283 } 00:27:28.283 ], 00:27:28.283 "allow_any_host": true, 00:27:28.283 "hosts": [], 00:27:28.283 "serial_number": "SPDK00000000000001", 00:27:28.283 "model_number": "SPDK bdev Controller", 00:27:28.283 "max_namespaces": 2, 00:27:28.283 "min_cntlid": 1, 00:27:28.283 "max_cntlid": 65519, 00:27:28.283 "namespaces": [ 00:27:28.283 { 00:27:28.283 "nsid": 1, 00:27:28.283 "bdev_name": "Malloc0", 00:27:28.283 "name": "Malloc0", 00:27:28.283 "nguid": "01EC0AE69CE24791831F4FBBF08639E1", 00:27:28.283 "uuid": "01ec0ae6-9ce2-4791-831f-4fbbf08639e1" 00:27:28.283 }, 00:27:28.283 { 00:27:28.283 "nsid": 2, 00:27:28.283 "bdev_name": "Malloc1", 00:27:28.283 "name": "Malloc1", 00:27:28.283 "nguid": "1AEFE16E28F34826A5A8FB1DDD35C7E5", 00:27:28.283 "uuid": "1aefe16e-28f3-4826-a5a8-fb1ddd35c7e5" 00:27:28.283 } 00:27:28.283 ] 00:27:28.283 } 00:27:28.283 ] 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1604891 00:27:28.283 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.284 rmmod nvme_tcp 00:27:28.284 rmmod nvme_fabrics 00:27:28.284 rmmod nvme_keyring 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1604857 ']' 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1604857 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1604857 ']' 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1604857 00:27:28.284 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1604857 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1604857' 00:27:28.542 killing process with pid 1604857 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1604857 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1604857 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:27:28.542 07:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.074 07:15:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.074 00:27:31.074 real 0m5.213s 00:27:31.074 user 0m4.111s 00:27:31.074 sys 0m1.787s 00:27:31.074 07:15:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.074 07:15:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.074 ************************************ 00:27:31.074 END TEST nvmf_aer 00:27:31.074 ************************************ 00:27:31.074 07:15:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:31.074 07:15:00 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:31.074 07:15:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:31.074 07:15:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.074 07:15:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.074 ************************************ 00:27:31.074 START TEST nvmf_async_init 00:27:31.074 ************************************ 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:31.074 * Looking for test storage... 00:27:31.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=df7f0dd167a4465f943377b61472c982 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.074 07:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:32.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:32.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:32.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.973 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:32.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.974 
07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:27:32.974 00:27:32.974 --- 10.0.0.2 ping statistics --- 00:27:32.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.974 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:27:32.974 00:27:32.974 --- 10.0.0.1 ping statistics --- 00:27:32.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.974 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1607048 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1607048 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1607048 ']' 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.974 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.974 [2024-07-13 07:15:02.235843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:32.974 [2024-07-13 07:15:02.235953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.974 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.974 [2024-07-13 07:15:02.272559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:32.974 [2024-07-13 07:15:02.303771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.974 [2024-07-13 07:15:02.393024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.974 [2024-07-13 07:15:02.393097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.974 [2024-07-13 07:15:02.393114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.974 [2024-07-13 07:15:02.393127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.974 [2024-07-13 07:15:02.393139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:32.974 [2024-07-13 07:15:02.393184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 [2024-07-13 07:15:02.539295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 null0 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g df7f0dd167a4465f943377b61472c982 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 [2024-07-13 07:15:02.579520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.233 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.492 nvme0n1 00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.492 [ 00:27:33.492 { 00:27:33.492 "name": "nvme0n1", 00:27:33.492 "aliases": [ 00:27:33.492 "df7f0dd1-67a4-465f-9433-77b61472c982" 00:27:33.492 ], 00:27:33.492 "product_name": "NVMe disk", 00:27:33.492 "block_size": 512, 00:27:33.492 "num_blocks": 2097152, 00:27:33.492 "uuid": "df7f0dd1-67a4-465f-9433-77b61472c982", 00:27:33.492 "assigned_rate_limits": { 00:27:33.492 "rw_ios_per_sec": 0, 00:27:33.492 "rw_mbytes_per_sec": 0, 00:27:33.492 "r_mbytes_per_sec": 0, 00:27:33.492 "w_mbytes_per_sec": 0 00:27:33.492 }, 00:27:33.492 "claimed": false, 00:27:33.492 "zoned": false, 00:27:33.492 "supported_io_types": { 00:27:33.492 "read": true, 00:27:33.492 "write": true, 00:27:33.492 "unmap": false, 00:27:33.492 "flush": true, 00:27:33.492 "reset": true, 00:27:33.492 "nvme_admin": true, 00:27:33.492 "nvme_io": true, 00:27:33.492 "nvme_io_md": false, 00:27:33.492 "write_zeroes": true, 00:27:33.492 "zcopy": false, 00:27:33.492 "get_zone_info": false, 00:27:33.492 "zone_management": false, 00:27:33.492 "zone_append": false, 00:27:33.492 "compare": true, 00:27:33.492 "compare_and_write": true, 00:27:33.492 "abort": true, 00:27:33.492 "seek_hole": false, 00:27:33.492 "seek_data": false, 00:27:33.492 "copy": true, 00:27:33.492 "nvme_iov_md": false 00:27:33.492 }, 00:27:33.492 "memory_domains": [ 00:27:33.492 { 00:27:33.492 "dma_device_id": "system", 00:27:33.492 "dma_device_type": 1 00:27:33.492 } 00:27:33.492 ], 00:27:33.492 "driver_specific": { 00:27:33.492 "nvme": [ 00:27:33.492 { 00:27:33.492 "trid": { 00:27:33.492 "trtype": "TCP", 00:27:33.492 "adrfam": "IPv4", 00:27:33.492 "traddr": "10.0.0.2", 00:27:33.492 "trsvcid": "4420", 00:27:33.492 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:33.492 }, 00:27:33.492 "ctrlr_data": { 00:27:33.492 "cntlid": 1, 00:27:33.492 "vendor_id": "0x8086", 00:27:33.492 "model_number": "SPDK bdev Controller", 00:27:33.492 "serial_number": "00000000000000000000", 00:27:33.492 "firmware_revision": "24.09", 00:27:33.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.492 "oacs": { 00:27:33.492 "security": 0, 00:27:33.492 "format": 0, 00:27:33.492 "firmware": 0, 00:27:33.492 "ns_manage": 0 00:27:33.492 }, 00:27:33.492 "multi_ctrlr": true, 00:27:33.492 "ana_reporting": false 00:27:33.492 }, 00:27:33.492 "vs": { 00:27:33.492 "nvme_version": "1.3" 00:27:33.492 }, 00:27:33.492 "ns_data": { 00:27:33.492 "id": 1, 00:27:33.492 "can_share": true 00:27:33.492 } 00:27:33.492 } 00:27:33.492 ], 00:27:33.492 "mp_policy": "active_passive" 00:27:33.492 } 00:27:33.492 } 00:27:33.492 ] 00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
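For reference, the host-side attach that produced the bdev dump above reduces to two rpc.py calls, and the property async_init actually cares about is that the reported uuid matches the nguid handed to nvmf_subsystem_add_ns earlier (df7f0dd1-67a4-465f-9433-77b61472c982 in this run). A sketch against the default RPC socket; the jq step is this note's addition, not something the test itself runs:

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0    # creates bdev nvme0n1
./scripts/rpc.py bdev_get_bdevs -b nvme0n1               # JSON as dumped above
./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid'

The bdev_nvme_reset_controller call that follows tears the connection down and reconnects; in the second dump the uuid is unchanged while ctrlr_data.cntlid ticks from 1 to 2, showing a fresh controller behind the same namespace.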
00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.492 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.492 [2024-07-13 07:15:02.833711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:33.492 [2024-07-13 07:15:02.833801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b640b0 (9): Bad file descriptor 00:27:33.750 [2024-07-13 07:15:02.966031] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.750 [ 00:27:33.750 { 00:27:33.750 "name": "nvme0n1", 00:27:33.750 "aliases": [ 00:27:33.750 "df7f0dd1-67a4-465f-9433-77b61472c982" 00:27:33.750 ], 00:27:33.750 "product_name": "NVMe disk", 00:27:33.750 "block_size": 512, 00:27:33.750 "num_blocks": 2097152, 00:27:33.750 "uuid": "df7f0dd1-67a4-465f-9433-77b61472c982", 00:27:33.750 "assigned_rate_limits": { 00:27:33.750 "rw_ios_per_sec": 0, 00:27:33.750 "rw_mbytes_per_sec": 0, 00:27:33.750 "r_mbytes_per_sec": 0, 00:27:33.750 "w_mbytes_per_sec": 0 00:27:33.750 }, 00:27:33.750 "claimed": false, 00:27:33.750 "zoned": false, 00:27:33.750 "supported_io_types": { 00:27:33.750 "read": true, 00:27:33.750 "write": true, 00:27:33.750 "unmap": false, 00:27:33.750 "flush": true, 00:27:33.750 "reset": true, 00:27:33.750 "nvme_admin": true, 00:27:33.750 "nvme_io": true, 00:27:33.750 "nvme_io_md": false, 00:27:33.750 "write_zeroes": true, 00:27:33.750 "zcopy": false, 00:27:33.750 "get_zone_info": false, 00:27:33.750 "zone_management": false, 00:27:33.750 "zone_append": false, 00:27:33.750 "compare": true, 00:27:33.750 "compare_and_write": true, 00:27:33.750 "abort": true, 00:27:33.750 "seek_hole": false, 00:27:33.750 "seek_data": false, 00:27:33.750 "copy": true, 00:27:33.750 "nvme_iov_md": false 00:27:33.750 }, 00:27:33.750 "memory_domains": [ 00:27:33.750 { 00:27:33.750 "dma_device_id": "system", 00:27:33.750 "dma_device_type": 1 00:27:33.750 } 00:27:33.750 ], 00:27:33.750 "driver_specific": { 00:27:33.750 "nvme": [ 00:27:33.750 { 00:27:33.750 "trid": { 00:27:33.750 "trtype": "TCP", 00:27:33.750 "adrfam": "IPv4", 00:27:33.750 "traddr": "10.0.0.2", 00:27:33.750 "trsvcid": "4420", 00:27:33.750 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:33.750 }, 00:27:33.750 "ctrlr_data": { 00:27:33.750 "cntlid": 2, 00:27:33.750 "vendor_id": "0x8086", 00:27:33.750 "model_number": "SPDK bdev Controller", 00:27:33.750 "serial_number": "00000000000000000000", 00:27:33.750 "firmware_revision": "24.09", 00:27:33.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.750 "oacs": { 00:27:33.750 "security": 0, 00:27:33.750 "format": 0, 00:27:33.750 "firmware": 0, 00:27:33.750 "ns_manage": 0 00:27:33.750 }, 00:27:33.750 "multi_ctrlr": true, 00:27:33.750 "ana_reporting": false 00:27:33.750 }, 00:27:33.750 "vs": { 00:27:33.750 "nvme_version": "1.3" 00:27:33.750 }, 00:27:33.750 "ns_data": { 00:27:33.750 "id": 1, 00:27:33.750 "can_share": true 00:27:33.750 } 00:27:33.750 } 00:27:33.750 ], 00:27:33.750 "mp_policy": "active_passive" 00:27:33.750 } 00:27:33.750 } 
00:27:33.750 ] 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.750 07:15:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NQCbNtXLF1 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NQCbNtXLF1 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.750 [2024-07-13 07:15:03.014327] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:33.750 [2024-07-13 07:15:03.014456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NQCbNtXLF1 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.750 [2024-07-13 07:15:03.022343] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NQCbNtXLF1 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.750 [2024-07-13 07:15:03.030371] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:33.750 [2024-07-13 07:15:03.030431] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
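The trace above walks the whole async_init sequence: attach nvme0 over plain TCP on port 4420, verify the namespace with bdev_get_bdevs (cntlid 1 in the first dump), reset the controller (the reconnect shows up as cntlid 2 in the second dump), detach, and then switch the subsystem to TLS by writing an interchange-format PSK to a mode-0600 temp file, disabling allow-any-host, adding a --secure-channel listener on port 4421, registering host1 with --psk, and re-attaching over the secure channel. A minimal sketch of the same TLS setup using SPDK's scripts/rpc.py, assuming an SPDK checkout with a running target and reusing the sample key from the trace (the harness's rpc_cmd wraps these same RPC methods):

  # Hedged sketch: provision a TLS PSK and re-attach over the secure channel.
  KEY_PATH=$(mktemp)     # same mktemp/chmod dance as async_init.sh@53-55
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"

The third bdev dump below confirms the TLS path end to end: same namespace UUID, but trsvcid 4421 and cntlid 3.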
00:27:33.750 nvme0n1 00:27:33.750 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.751 [ 00:27:33.751 { 00:27:33.751 "name": "nvme0n1", 00:27:33.751 "aliases": [ 00:27:33.751 "df7f0dd1-67a4-465f-9433-77b61472c982" 00:27:33.751 ], 00:27:33.751 "product_name": "NVMe disk", 00:27:33.751 "block_size": 512, 00:27:33.751 "num_blocks": 2097152, 00:27:33.751 "uuid": "df7f0dd1-67a4-465f-9433-77b61472c982", 00:27:33.751 "assigned_rate_limits": { 00:27:33.751 "rw_ios_per_sec": 0, 00:27:33.751 "rw_mbytes_per_sec": 0, 00:27:33.751 "r_mbytes_per_sec": 0, 00:27:33.751 "w_mbytes_per_sec": 0 00:27:33.751 }, 00:27:33.751 "claimed": false, 00:27:33.751 "zoned": false, 00:27:33.751 "supported_io_types": { 00:27:33.751 "read": true, 00:27:33.751 "write": true, 00:27:33.751 "unmap": false, 00:27:33.751 "flush": true, 00:27:33.751 "reset": true, 00:27:33.751 "nvme_admin": true, 00:27:33.751 "nvme_io": true, 00:27:33.751 "nvme_io_md": false, 00:27:33.751 "write_zeroes": true, 00:27:33.751 "zcopy": false, 00:27:33.751 "get_zone_info": false, 00:27:33.751 "zone_management": false, 00:27:33.751 "zone_append": false, 00:27:33.751 "compare": true, 00:27:33.751 "compare_and_write": true, 00:27:33.751 "abort": true, 00:27:33.751 "seek_hole": false, 00:27:33.751 "seek_data": false, 00:27:33.751 "copy": true, 00:27:33.751 "nvme_iov_md": false 00:27:33.751 }, 00:27:33.751 "memory_domains": [ 00:27:33.751 { 00:27:33.751 "dma_device_id": "system", 00:27:33.751 "dma_device_type": 1 00:27:33.751 } 00:27:33.751 ], 00:27:33.751 "driver_specific": { 00:27:33.751 "nvme": [ 00:27:33.751 { 00:27:33.751 "trid": { 00:27:33.751 "trtype": "TCP", 00:27:33.751 "adrfam": "IPv4", 00:27:33.751 "traddr": "10.0.0.2", 00:27:33.751 "trsvcid": "4421", 00:27:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:33.751 }, 00:27:33.751 "ctrlr_data": { 00:27:33.751 "cntlid": 3, 00:27:33.751 "vendor_id": "0x8086", 00:27:33.751 "model_number": "SPDK bdev Controller", 00:27:33.751 "serial_number": "00000000000000000000", 00:27:33.751 "firmware_revision": "24.09", 00:27:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.751 "oacs": { 00:27:33.751 "security": 0, 00:27:33.751 "format": 0, 00:27:33.751 "firmware": 0, 00:27:33.751 "ns_manage": 0 00:27:33.751 }, 00:27:33.751 "multi_ctrlr": true, 00:27:33.751 "ana_reporting": false 00:27:33.751 }, 00:27:33.751 "vs": { 00:27:33.751 "nvme_version": "1.3" 00:27:33.751 }, 00:27:33.751 "ns_data": { 00:27:33.751 "id": 1, 00:27:33.751 "can_share": true 00:27:33.751 } 00:27:33.751 } 00:27:33.751 ], 00:27:33.751 "mp_policy": "active_passive" 00:27:33.751 } 00:27:33.751 } 00:27:33.751 ] 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.NQCbNtXLF1 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.751 rmmod nvme_tcp 00:27:33.751 rmmod nvme_fabrics 00:27:33.751 rmmod nvme_keyring 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1607048 ']' 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1607048 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1607048 ']' 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1607048 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.751 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1607048 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1607048' 00:27:34.009 killing process with pid 1607048 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1607048 00:27:34.009 [2024-07-13 07:15:03.224488] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:34.009 [2024-07-13 07:15:03.224538] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1607048 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.009 07:15:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:36.611 07:15:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:36.611 00:27:36.611 real 0m5.428s 00:27:36.611 user 0m2.071s 00:27:36.611 sys 0m1.733s 00:27:36.611 07:15:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.611 07:15:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.611 ************************************ 00:27:36.611 END TEST nvmf_async_init 00:27:36.611 ************************************ 00:27:36.611 07:15:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:36.611 07:15:05 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:36.611 07:15:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:36.611 07:15:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.611 07:15:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.611 ************************************ 00:27:36.611 START TEST dma 00:27:36.611 ************************************ 00:27:36.611 07:15:05 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:36.611 * Looking for test storage... 00:27:36.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.611 07:15:05 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.611 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.611 07:15:05 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.611 07:15:05 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.611 07:15:05 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.611 07:15:05 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:36.612 07:15:05 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.612 07:15:05 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.612 07:15:05 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:36.612 07:15:05 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:36.612 00:27:36.612 real 0m0.061s 00:27:36.612 user 0m0.036s 00:27:36.612 sys 0m0.030s 00:27:36.612 07:15:05 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.612 07:15:05 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:27:36.612 ************************************ 00:27:36.612 END TEST dma 00:27:36.612 ************************************ 00:27:36.612 07:15:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:36.612 07:15:05 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:36.612 07:15:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:36.612 07:15:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.612 07:15:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.612 ************************************ 00:27:36.612 START TEST nvmf_identify 00:27:36.612 ************************************ 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:36.612 * Looking for test storage... 00:27:36.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.612 07:15:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:38.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:27:38.516 00:27:38.516 --- 10.0.0.2 ping statistics --- 00:27:38.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.516 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:27:38.516 00:27:38.516 --- 10.0.0.1 ping statistics --- 00:27:38.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.516 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1609336 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1609336 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1609336 ']' 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.516 07:15:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.516 [2024-07-13 07:15:07.791332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
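At this point nvmf/common.sh has finished building the two-port test topology that the pings just validated: the target-side E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that plumbing, using only commands visible in the trace (interface names are this test bed's ice/E810 ports; the binary path assumes an SPDK build tree):

  # Hedged sketch: netns topology used by the TCP autotests.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &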
00:27:38.517 [2024-07-13 07:15:07.791414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.517 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.517 [2024-07-13 07:15:07.832412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:38.517 [2024-07-13 07:15:07.861389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.517 [2024-07-13 07:15:07.949188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.517 [2024-07-13 07:15:07.949241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.517 [2024-07-13 07:15:07.949271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.517 [2024-07-13 07:15:07.949283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.517 [2024-07-13 07:15:07.949293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.517 [2024-07-13 07:15:07.949361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.517 [2024-07-13 07:15:07.949419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.517 [2024-07-13 07:15:07.949475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.517 [2024-07-13 07:15:07.949477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 [2024-07-13 07:15:08.083598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 Malloc0 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 [2024-07-13 07:15:08.155425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.775 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.775 [ 00:27:38.775 { 00:27:38.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:38.775 "subtype": "Discovery", 00:27:38.775 "listen_addresses": [ 00:27:38.775 { 00:27:38.775 "trtype": "TCP", 00:27:38.775 "adrfam": "IPv4", 00:27:38.776 "traddr": "10.0.0.2", 00:27:38.776 "trsvcid": "4420" 00:27:38.776 } 00:27:38.776 ], 00:27:38.776 "allow_any_host": true, 00:27:38.776 "hosts": [] 00:27:38.776 }, 00:27:38.776 { 00:27:38.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.776 "subtype": "NVMe", 00:27:38.776 "listen_addresses": [ 00:27:38.776 { 00:27:38.776 "trtype": "TCP", 00:27:38.776 "adrfam": "IPv4", 00:27:38.776 "traddr": "10.0.0.2", 00:27:38.776 "trsvcid": "4420" 00:27:38.776 } 00:27:38.776 ], 00:27:38.776 "allow_any_host": true, 00:27:38.776 "hosts": [], 00:27:38.776 "serial_number": "SPDK00000000000001", 00:27:38.776 "model_number": "SPDK bdev Controller", 00:27:38.776 "max_namespaces": 32, 00:27:38.776 "min_cntlid": 1, 00:27:38.776 "max_cntlid": 65519, 00:27:38.776 "namespaces": [ 00:27:38.776 { 00:27:38.776 "nsid": 1, 00:27:38.776 "bdev_name": "Malloc0", 00:27:38.776 "name": "Malloc0", 00:27:38.776 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:38.776 "eui64": "ABCDEF0123456789", 00:27:38.776 "uuid": "390be8a2-2e22-412d-96a0-f191e18cac50" 00:27:38.776 } 00:27:38.776 ] 00:27:38.776 } 00:27:38.776 ] 00:27:38.776 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.776 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:38.776 
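The identify host is provisioned entirely over RPC before spdk_nvme_identify runs: a TCP transport with an 8192-byte IO unit, a 64 MB Malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace with fixed NGUID/EUI-64 values, plus data and discovery listeners on port 4420; the nvmf_get_subsystems dump above reflects exactly that layout. A sketch of the same provisioning via scripts/rpc.py, assuming an SPDK checkout (the harness's rpc_cmd wraps these same methods):

  # Hedged sketch: target setup mirrored from identify.sh@24-35.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Then query the discovery service, as the trace does next:
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all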
[2024-07-13 07:15:08.197276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:38.776 [2024-07-13 07:15:08.197322] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609513 ] 00:27:38.776 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.776 [2024-07-13 07:15:08.213572] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:39.038 [2024-07-13 07:15:08.237442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:39.038 [2024-07-13 07:15:08.237498] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:39.038 [2024-07-13 07:15:08.237507] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:39.038 [2024-07-13 07:15:08.237524] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:39.038 [2024-07-13 07:15:08.237533] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:39.038 [2024-07-13 07:15:08.237820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:39.038 [2024-07-13 07:15:08.237877] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15f8630 0 00:27:39.038 [2024-07-13 07:15:08.251879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:39.038 [2024-07-13 07:15:08.251899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:39.038 [2024-07-13 07:15:08.251932] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:39.038 [2024-07-13 07:15:08.251938] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:39.038 [2024-07-13 07:15:08.251988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.252001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.252008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.252025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:39.038 [2024-07-13 07:15:08.252056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.038 [2024-07-13 07:15:08.259882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.038 [2024-07-13 07:15:08.259900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.038 [2024-07-13 07:15:08.259908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.259916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.038 [2024-07-13 07:15:08.259931] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:39.038 [2024-07-13 07:15:08.259942] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:39.038 [2024-07-13 07:15:08.259952] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:39.038 [2024-07-13 07:15:08.259973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.259982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.259989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.260000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.038 [2024-07-13 07:15:08.260024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.038 [2024-07-13 07:15:08.260187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.038 [2024-07-13 07:15:08.260203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.038 [2024-07-13 07:15:08.260210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.038 [2024-07-13 07:15:08.260225] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:39.038 [2024-07-13 07:15:08.260239] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:39.038 [2024-07-13 07:15:08.260251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.260276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.038 [2024-07-13 07:15:08.260298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.038 [2024-07-13 07:15:08.260409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.038 [2024-07-13 07:15:08.260424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.038 [2024-07-13 07:15:08.260431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.038 [2024-07-13 07:15:08.260446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:39.038 [2024-07-13 07:15:08.260461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:39.038 [2024-07-13 07:15:08.260473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.260498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.038 [2024-07-13 07:15:08.260524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.038 [2024-07-13 07:15:08.260656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.038 [2024-07-13 07:15:08.260671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.038 [2024-07-13 07:15:08.260678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.038 [2024-07-13 07:15:08.260694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:39.038 [2024-07-13 07:15:08.260710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.260737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.038 [2024-07-13 07:15:08.260758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.038 [2024-07-13 07:15:08.260879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.038 [2024-07-13 07:15:08.260895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.038 [2024-07-13 07:15:08.260902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.260910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.038 [2024-07-13 07:15:08.260918] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:39.038 [2024-07-13 07:15:08.260927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:39.038 [2024-07-13 07:15:08.260941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:39.038 [2024-07-13 07:15:08.261051] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:39.038 [2024-07-13 07:15:08.261060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:39.038 [2024-07-13 07:15:08.261073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.261081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.261087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.261098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.038 [2024-07-13 07:15:08.261119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 
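With -L all, the identify example dumps the fabrics controller-initialization state machine in full: connect adminq, read vs, read cap, check en, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, reset admin queue, identify controller. Every transition is emitted by _nvme_ctrlr_set_state, and each register access appears as a FABRIC PROPERTY GET/SET capsule on qid 0. One way to distill the sequence from a saved trace, assuming the output above was captured to a file named identify.log (the grep pattern simply matches the state-transition lines):

  # Hedged sketch: list the init states in order, collapsing repeated polls.
  grep -o 'setting state to [^(]*' identify.log | uniq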
00:27:39.038 [2024-07-13 07:15:08.261279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.038 [2024-07-13 07:15:08.261291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.038 [2024-07-13 07:15:08.261298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.261305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.038 [2024-07-13 07:15:08.261314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:39.038 [2024-07-13 07:15:08.261330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.261339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.261345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.261361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.038 [2024-07-13 07:15:08.261382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.038 [2024-07-13 07:15:08.261510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.038 [2024-07-13 07:15:08.261522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.038 [2024-07-13 07:15:08.261529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.261536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.038 [2024-07-13 07:15:08.261544] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:39.038 [2024-07-13 07:15:08.261553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:39.038 [2024-07-13 07:15:08.261566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:39.038 [2024-07-13 07:15:08.261585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:39.038 [2024-07-13 07:15:08.261601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.038 [2024-07-13 07:15:08.261608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.038 [2024-07-13 07:15:08.261619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.038 [2024-07-13 07:15:08.261641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.039 [2024-07-13 07:15:08.261836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.039 [2024-07-13 07:15:08.261852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.039 [2024-07-13 07:15:08.261859] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.261873] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x15f8630): datao=0, datal=4096, cccid=0 00:27:39.039 [2024-07-13 07:15:08.261882] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1646f80) on tqpair(0x15f8630): expected_datao=0, payload_size=4096 00:27:39.039 [2024-07-13 07:15:08.261891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.261908] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.261918] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.305892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.039 [2024-07-13 07:15:08.305910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.039 [2024-07-13 07:15:08.305918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.305925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.039 [2024-07-13 07:15:08.305936] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:39.039 [2024-07-13 07:15:08.305950] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:39.039 [2024-07-13 07:15:08.305958] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:39.039 [2024-07-13 07:15:08.305967] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:39.039 [2024-07-13 07:15:08.305975] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:39.039 [2024-07-13 07:15:08.305983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:39.039 [2024-07-13 07:15:08.306002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:39.039 [2024-07-13 07:15:08.306015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.306040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:39.039 [2024-07-13 07:15:08.306062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.039 [2024-07-13 07:15:08.306228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.039 [2024-07-13 07:15:08.306241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.039 [2024-07-13 07:15:08.306248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630 00:27:39.039 [2024-07-13 07:15:08.306267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306280] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.306291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.039 [2024-07-13 07:15:08.306301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.306323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.039 [2024-07-13 07:15:08.306332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.306354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.039 [2024-07-13 07:15:08.306363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.306400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.039 [2024-07-13 07:15:08.306409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:39.039 [2024-07-13 07:15:08.306428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:39.039 [2024-07-13 07:15:08.306441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.306457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.039 [2024-07-13 07:15:08.306479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1646f80, cid 0, qid 0 00:27:39.039 [2024-07-13 07:15:08.306505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647100, cid 1, qid 0 00:27:39.039 [2024-07-13 07:15:08.306516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647280, cid 2, qid 0 00:27:39.039 [2024-07-13 07:15:08.306525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.039 [2024-07-13 07:15:08.306533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647580, cid 4, qid 0 00:27:39.039 [2024-07-13 07:15:08.306706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.039 [2024-07-13 07:15:08.306722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
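The four ASYNC EVENT REQUEST submissions above (cid 0 through 3) match the Async Event Request Limit of 4 reported in the identify data further down. The interleaved "pdu type = N" values throughout this trace are NVMe/TCP PDU opcodes from the common header; a short sketch of how a receive path dispatches on them is below. The opcode values are from the NVMe/TCP transport specification; the case comments echo the handler names in this trace, but the function itself is a placeholder, not SPDK's code:

    #include <stdint.h>

    /* NVMe/TCP PDU types (byte 0 of the PDU common header). */
    enum nvme_tcp_pdu_type {
        PDU_ICREQ        = 0x00,
        PDU_ICRESP       = 0x01, /* "pdu type = 1" during connect */
        PDU_H2C_TERM_REQ = 0x02,
        PDU_C2H_TERM_REQ = 0x03,
        PDU_CAPSULE_CMD  = 0x04,
        PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": command completion */
        PDU_H2C_DATA     = 0x06,
        PDU_C2H_DATA     = 0x07, /* "pdu type = 7": controller-to-host data */
        PDU_R2T          = 0x09
    };

    static void pdu_psh_handle(uint8_t pdu_type)
    {
        switch (pdu_type) {
        case PDU_CAPSULE_RESP:
            /* capsule_resp_hdr_handle -> req_complete in the trace */
            break;
        case PDU_C2H_DATA:
            /* c2h_data_hdr_handle, then payload handling */
            break;
        default:
            /* unexpected types terminate the connection */
            break;
        }
    }
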
00:27:39.039 [2024-07-13 07:15:08.306729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647580) on tqpair=0x15f8630 00:27:39.039 [2024-07-13 07:15:08.306744] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:39.039 [2024-07-13 07:15:08.306753] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:39.039 [2024-07-13 07:15:08.306770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.306780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.306790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.039 [2024-07-13 07:15:08.306827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647580, cid 4, qid 0 00:27:39.039 [2024-07-13 07:15:08.307035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.039 [2024-07-13 07:15:08.307051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.039 [2024-07-13 07:15:08.307058] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307064] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f8630): datao=0, datal=4096, cccid=4 00:27:39.039 [2024-07-13 07:15:08.307072] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1647580) on tqpair(0x15f8630): expected_datao=0, payload_size=4096 00:27:39.039 [2024-07-13 07:15:08.307079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307103] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307111] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.039 [2024-07-13 07:15:08.307200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.039 [2024-07-13 07:15:08.307207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647580) on tqpair=0x15f8630 00:27:39.039 [2024-07-13 07:15:08.307232] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:39.039 [2024-07-13 07:15:08.307270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.307291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.039 [2024-07-13 07:15:08.307303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.039 [2024-07-13 07:15:08.307316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x15f8630) 00:27:39.039 [2024-07-13 07:15:08.307325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.039 [2024-07-13 07:15:08.307352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647580, cid 4, qid 0 00:27:39.039 [2024-07-13 07:15:08.307364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647700, cid 5, qid 0 00:27:39.039 [2024-07-13 07:15:08.307531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.039 [2024-07-13 07:15:08.307547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.040 [2024-07-13 07:15:08.307554] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.307560] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f8630): datao=0, datal=1024, cccid=4 00:27:39.040 [2024-07-13 07:15:08.307568] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1647580) on tqpair(0x15f8630): expected_datao=0, payload_size=1024 00:27:39.040 [2024-07-13 07:15:08.307575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.307585] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.307592] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.307601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.040 [2024-07-13 07:15:08.307610] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.040 [2024-07-13 07:15:08.307616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.307623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647700) on tqpair=0x15f8630 00:27:39.040 [2024-07-13 07:15:08.348018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.040 [2024-07-13 07:15:08.348038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.040 [2024-07-13 07:15:08.348046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.348054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647580) on tqpair=0x15f8630 00:27:39.040 [2024-07-13 07:15:08.348071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.348081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f8630) 00:27:39.040 [2024-07-13 07:15:08.348093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.040 [2024-07-13 07:15:08.348122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647580, cid 4, qid 0 00:27:39.040 [2024-07-13 07:15:08.348265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.040 [2024-07-13 07:15:08.348281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.040 [2024-07-13 07:15:08.348289] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.040 [2024-07-13 07:15:08.348296] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f8630): datao=0, datal=3072, cccid=4 00:27:39.040 [2024-07-13 07:15:08.348304] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1647580) on tqpair(0x15f8630): 
expected_datao=0, payload_size=3072
00:27:39.040 [2024-07-13 07:15:08.348311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348321] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348329] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:39.040 [2024-07-13 07:15:08.348397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:39.040 [2024-07-13 07:15:08.348404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647580) on tqpair=0x15f8630
00:27:39.040 [2024-07-13 07:15:08.348425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f8630)
00:27:39.040 [2024-07-13 07:15:08.348444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.040 [2024-07-13 07:15:08.348471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647580, cid 4, qid 0
00:27:39.040 [2024-07-13 07:15:08.348612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:39.040 [2024-07-13 07:15:08.348625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:39.040 [2024-07-13 07:15:08.348632] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348638] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f8630): datao=0, datal=8, cccid=4
00:27:39.040 [2024-07-13 07:15:08.348646] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1647580) on tqpair(0x15f8630): expected_datao=0, payload_size=8
00:27:39.040 [2024-07-13 07:15:08.348654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348663] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.348670] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.391878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:39.040 [2024-07-13 07:15:08.391897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:39.040 [2024-07-13 07:15:08.391904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:39.040 [2024-07-13 07:15:08.391911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647580) on tqpair=0x15f8630
00:27:39.040 =====================================================
00:27:39.040 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:39.040 =====================================================
00:27:39.040 Controller Capabilities/Features
00:27:39.040 ================================
00:27:39.040 Vendor ID: 0000
00:27:39.040 Subsystem Vendor ID: 0000
00:27:39.040 Serial Number: ....................
00:27:39.040 Model Number: ........................................
00:27:39.040 Firmware Version: 24.09
00:27:39.040 Recommended Arb Burst: 0
00:27:39.040 IEEE OUI Identifier: 00 00 00
00:27:39.040 Multi-path I/O
00:27:39.040 May have multiple subsystem ports: No
00:27:39.040 May have multiple controllers: No
00:27:39.040 Associated with SR-IOV VF: No
00:27:39.040 Max Data Transfer Size: 131072
00:27:39.040 Max Number of Namespaces: 0
00:27:39.040 Max Number of I/O Queues: 1024
00:27:39.040 NVMe Specification Version (VS): 1.3
00:27:39.040 NVMe Specification Version (Identify): 1.3
00:27:39.040 Maximum Queue Entries: 128
00:27:39.040 Contiguous Queues Required: Yes
00:27:39.040 Arbitration Mechanisms Supported
00:27:39.040 Weighted Round Robin: Not Supported
00:27:39.040 Vendor Specific: Not Supported
00:27:39.040 Reset Timeout: 15000 ms
00:27:39.040 Doorbell Stride: 4 bytes
00:27:39.040 NVM Subsystem Reset: Not Supported
00:27:39.040 Command Sets Supported
00:27:39.040 NVM Command Set: Supported
00:27:39.040 Boot Partition: Not Supported
00:27:39.040 Memory Page Size Minimum: 4096 bytes
00:27:39.040 Memory Page Size Maximum: 4096 bytes
00:27:39.040 Persistent Memory Region: Not Supported
00:27:39.040 Optional Asynchronous Events Supported
00:27:39.040 Namespace Attribute Notices: Not Supported
00:27:39.040 Firmware Activation Notices: Not Supported
00:27:39.040 ANA Change Notices: Not Supported
00:27:39.040 PLE Aggregate Log Change Notices: Not Supported
00:27:39.040 LBA Status Info Alert Notices: Not Supported
00:27:39.040 EGE Aggregate Log Change Notices: Not Supported
00:27:39.040 Normal NVM Subsystem Shutdown event: Not Supported
00:27:39.040 Zone Descriptor Change Notices: Not Supported
00:27:39.040 Discovery Log Change Notices: Supported
00:27:39.040 Controller Attributes
00:27:39.040 128-bit Host Identifier: Not Supported
00:27:39.040 Non-Operational Permissive Mode: Not Supported
00:27:39.040 NVM Sets: Not Supported
00:27:39.040 Read Recovery Levels: Not Supported
00:27:39.040 Endurance Groups: Not Supported
00:27:39.040 Predictable Latency Mode: Not Supported
00:27:39.040 Traffic Based Keep ALive: Not Supported
00:27:39.040 Namespace Granularity: Not Supported
00:27:39.040 SQ Associations: Not Supported
00:27:39.040 UUID List: Not Supported
00:27:39.040 Multi-Domain Subsystem: Not Supported
00:27:39.040 Fixed Capacity Management: Not Supported
00:27:39.040 Variable Capacity Management: Not Supported
00:27:39.040 Delete Endurance Group: Not Supported
00:27:39.040 Delete NVM Set: Not Supported
00:27:39.040 Extended LBA Formats Supported: Not Supported
00:27:39.040 Flexible Data Placement Supported: Not Supported
00:27:39.040
00:27:39.040 Controller Memory Buffer Support
00:27:39.040 ================================
00:27:39.040 Supported: No
00:27:39.040
00:27:39.040 Persistent Memory Region Support
00:27:39.040 ================================
00:27:39.040 Supported: No
00:27:39.040
00:27:39.040 Admin Command Set Attributes
00:27:39.040 ============================
00:27:39.040 Security Send/Receive: Not Supported
00:27:39.040 Format NVM: Not Supported
00:27:39.040 Firmware Activate/Download: Not Supported
00:27:39.040 Namespace Management: Not Supported
00:27:39.040 Device Self-Test: Not Supported
00:27:39.040 Directives: Not Supported
00:27:39.040 NVMe-MI: Not Supported
00:27:39.040 Virtualization Management: Not Supported
00:27:39.040 Doorbell Buffer Config: Not Supported
00:27:39.040 Get LBA Status Capability: Not Supported
00:27:39.040 Command & Feature Lockdown Capability: Not Supported
00:27:39.040 Abort Command Limit: 1
00:27:39.040 Async Event Request Limit: 4
00:27:39.040 Number of Firmware Slots: N/A
00:27:39.040 Firmware Slot 1 Read-Only: N/A
00:27:39.040 Firmware Activation Without Reset: N/A
00:27:39.040 Multiple Update Detection Support: N/A
00:27:39.040 Firmware Update Granularity: No Information Provided
00:27:39.040 Per-Namespace SMART Log: No
00:27:39.040 Asymmetric Namespace Access Log Page: Not Supported
00:27:39.040 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:39.040 Command Effects Log Page: Not Supported
00:27:39.040 Get Log Page Extended Data: Supported
00:27:39.040 Telemetry Log Pages: Not Supported
00:27:39.040 Persistent Event Log Pages: Not Supported
00:27:39.040 Supported Log Pages Log Page: May Support
00:27:39.040 Commands Supported & Effects Log Page: Not Supported
00:27:39.040 Feature Identifiers & Effects Log Page: May Support
00:27:39.040 NVMe-MI Commands & Effects Log Page: May Support
00:27:39.040 Data Area 4 for Telemetry Log: Not Supported
00:27:39.040 Error Log Page Entries Supported: 128
00:27:39.040 Keep Alive: Not Supported
00:27:39.040
00:27:39.041 NVM Command Set Attributes
00:27:39.041 ==========================
00:27:39.041 Submission Queue Entry Size
00:27:39.041 Max: 1
00:27:39.041 Min: 1
00:27:39.041 Completion Queue Entry Size
00:27:39.041 Max: 1
00:27:39.041 Min: 1
00:27:39.041 Number of Namespaces: 0
00:27:39.041 Compare Command: Not Supported
00:27:39.041 Write Uncorrectable Command: Not Supported
00:27:39.041 Dataset Management Command: Not Supported
00:27:39.041 Write Zeroes Command: Not Supported
00:27:39.041 Set Features Save Field: Not Supported
00:27:39.041 Reservations: Not Supported
00:27:39.041 Timestamp: Not Supported
00:27:39.041 Copy: Not Supported
00:27:39.041 Volatile Write Cache: Not Present
00:27:39.041 Atomic Write Unit (Normal): 1
00:27:39.041 Atomic Write Unit (PFail): 1
00:27:39.041 Atomic Compare & Write Unit: 1
00:27:39.041 Fused Compare & Write: Supported
00:27:39.041 Scatter-Gather List
00:27:39.041 SGL Command Set: Supported
00:27:39.041 SGL Keyed: Supported
00:27:39.041 SGL Bit Bucket Descriptor: Not Supported
00:27:39.041 SGL Metadata Pointer: Not Supported
00:27:39.041 Oversized SGL: Not Supported
00:27:39.041 SGL Metadata Address: Not Supported
00:27:39.041 SGL Offset: Supported
00:27:39.041 Transport SGL Data Block: Not Supported
00:27:39.041 Replay Protected Memory Block: Not Supported
00:27:39.041
00:27:39.041 Firmware Slot Information
00:27:39.041 =========================
00:27:39.041 Active slot: 0
00:27:39.041
00:27:39.041
00:27:39.041 Error Log
00:27:39.041 =========
00:27:39.041
00:27:39.041 Active Namespaces
00:27:39.041 =================
00:27:39.041 Discovery Log Page
00:27:39.041 ==================
00:27:39.041 Generation Counter: 2
00:27:39.041 Number of Records: 2
00:27:39.041 Record Format: 0
00:27:39.041
00:27:39.041 Discovery Log Entry 0
00:27:39.041 ----------------------
00:27:39.041 Transport Type: 3 (TCP)
00:27:39.041 Address Family: 1 (IPv4)
00:27:39.041 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:39.041 Entry Flags:
00:27:39.041 Duplicate Returned Information: 1
00:27:39.041 Explicit Persistent Connection Support for Discovery: 1
00:27:39.041 Transport Requirements:
00:27:39.041 Secure Channel: Not Required
00:27:39.041 Port ID: 0 (0x0000)
00:27:39.041 Controller ID: 65535 (0xffff)
00:27:39.041 Admin Max SQ Size: 128
00:27:39.041 Transport Service Identifier: 4420
00:27:39.041 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:39.041 Transport Address: 10.0.0.2
00:27:39.041 Discovery Log Entry 1
00:27:39.041 ----------------------
00:27:39.041 Transport Type: 3 (TCP)
00:27:39.041 Address Family: 1 (IPv4)
00:27:39.041 Subsystem Type: 2 (NVM Subsystem)
00:27:39.041 Entry Flags:
00:27:39.041 Duplicate Returned Information: 0
00:27:39.041 Explicit Persistent Connection Support for Discovery: 0
00:27:39.041 Transport Requirements:
00:27:39.041 Secure Channel: Not Required
00:27:39.041 Port ID: 0 (0x0000)
00:27:39.041 Controller ID: 65535 (0xffff)
00:27:39.041 Admin Max SQ Size: 128
00:27:39.041 Transport Service Identifier: 4420
00:27:39.041 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:39.041 Transport Address: 10.0.0.2
[2024-07-13 07:15:08.392037] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:27:39.041 [2024-07-13 07:15:08.392060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1646f80) on tqpair=0x15f8630
00:27:39.041 [2024-07-13 07:15:08.392072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.041 [2024-07-13 07:15:08.392081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647100) on tqpair=0x15f8630
00:27:39.041 [2024-07-13 07:15:08.392088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.041 [2024-07-13 07:15:08.392096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647280) on tqpair=0x15f8630
00:27:39.041 [2024-07-13 07:15:08.392104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.041 [2024-07-13 07:15:08.392112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630
00:27:39.041 [2024-07-13 07:15:08.392119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.041 [2024-07-13 07:15:08.392137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:39.041 [2024-07-13 07:15:08.392146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:39.041 [2024-07-13 07:15:08.392152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630)
00:27:39.041 [2024-07-13 07:15:08.392163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.041 [2024-07-13 07:15:08.392203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0
00:27:39.041 [2024-07-13 07:15:08.392421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:39.041 [2024-07-13 07:15:08.392436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:39.041 [2024-07-13 07:15:08.392443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:39.041 [2024-07-13 07:15:08.392450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630
00:27:39.041 [2024-07-13 07:15:08.392462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:39.041 [2024-07-13 07:15:08.392470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:39.041 [2024-07-13 07:15:08.392476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630)
00:27:39.041 [2024-07-13
07:15:08.392487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.041 [2024-07-13 07:15:08.392513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.041 [2024-07-13 07:15:08.392654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.041 [2024-07-13 07:15:08.392667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.041 [2024-07-13 07:15:08.392674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.392680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.041 [2024-07-13 07:15:08.392689] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:39.041 [2024-07-13 07:15:08.392697] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:39.041 [2024-07-13 07:15:08.392713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.392721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.392728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.041 [2024-07-13 07:15:08.392738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.041 [2024-07-13 07:15:08.392758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.041 [2024-07-13 07:15:08.392878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.041 [2024-07-13 07:15:08.392892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.041 [2024-07-13 07:15:08.392899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.392906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.041 [2024-07-13 07:15:08.392923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.392932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.392939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.041 [2024-07-13 07:15:08.392949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.041 [2024-07-13 07:15:08.392970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.041 [2024-07-13 07:15:08.393087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.041 [2024-07-13 07:15:08.393102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.041 [2024-07-13 07:15:08.393109] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.393116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.041 [2024-07-13 07:15:08.393132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.393141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.041 [2024-07-13 07:15:08.393148] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.041 [2024-07-13 07:15:08.393158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.393179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.393293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.393308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.393315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.393338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.393364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.393389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.393552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.393564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.393571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.393594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.393620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.393640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.393753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.393765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.393772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.393794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.393819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.393839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.393963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.393978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.393985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.393992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.394008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.394034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.394055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.394219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.394234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.394240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.394263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.394289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.394314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.394477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.394489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.394496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.394519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.394545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.394565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 
[2024-07-13 07:15:08.394678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.394693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.394700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.394723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.394749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.394769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.394881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.394895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.394902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.394924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.394940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.394950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.394971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.395085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.395101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.042 [2024-07-13 07:15:08.395107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.395114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.395130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.395140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.395146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.395156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.395177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.395291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.395303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
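From the "RTD3E = 0 us" / "shutdown timeout = 10000 ms" records onward, this long run of near-identical FABRIC PROPERTY GET completions on cid 3 is the shutdown poll: the host has written CC.SHN and keeps re-reading CSTS until CSTS.SHST reports shutdown complete. A sketch of that handshake, with bit positions per the NVMe base specification; prop_get()/prop_set() are hypothetical stand-ins for the fabrics property commands in the trace:

    #include <stdint.h>

    #define NVME_REG_CC          0x14u
    #define NVME_REG_CSTS        0x1cu
    #define NVME_CC_SHN_NORMAL   (1u << 14) /* CC.SHN (bits 15:14) = 01b */
    #define NVME_CSTS_SHST_MASK  (3u << 2)  /* CSTS.SHST, bits 3:2 */
    #define NVME_CSTS_SHST_DONE  (2u << 2)  /* 10b = shutdown complete */

    extern uint32_t prop_get(uint32_t offset);           /* hypothetical */
    extern void prop_set(uint32_t offset, uint32_t val); /* hypothetical */

    static void shutdown_controller(void)
    {
        /* Request a normal shutdown via CC.SHN. */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_SHN_NORMAL);

        /* Poll CSTS.SHST until 10b; the trace below reports this
         * finishing in 8 ms against the 10000 ms budget. */
        while ((prop_get(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK)
               != NVME_CSTS_SHST_DONE) { }
    }
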
00:27:39.042 [2024-07-13 07:15:08.395310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.395317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.042 [2024-07-13 07:15:08.395333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.395342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.042 [2024-07-13 07:15:08.395349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.042 [2024-07-13 07:15:08.395359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.042 [2024-07-13 07:15:08.395379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.042 [2024-07-13 07:15:08.395491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.042 [2024-07-13 07:15:08.395506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.395513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.395536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.043 [2024-07-13 07:15:08.395562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.395582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.043 [2024-07-13 07:15:08.395696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.395711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.395718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.395741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.043 [2024-07-13 07:15:08.395767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.395787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.043 [2024-07-13 07:15:08.395907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.395922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.395929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.395952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.395968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.043 [2024-07-13 07:15:08.395978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.395999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.043 [2024-07-13 07:15:08.396161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.396179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.396187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.396210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.043 [2024-07-13 07:15:08.396236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.396257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.043 [2024-07-13 07:15:08.396420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.396435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.396442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.396465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.043 [2024-07-13 07:15:08.396491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.396511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.043 [2024-07-13 07:15:08.396620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.396632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.396639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.396661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396670] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.396677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.043 [2024-07-13 07:15:08.396687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.396707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.043 [2024-07-13 07:15:08.400890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.400907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.400914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.400920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.400938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.400947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.400953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f8630) 00:27:39.043 [2024-07-13 07:15:08.400964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.400985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1647400, cid 3, qid 0 00:27:39.043 [2024-07-13 07:15:08.401167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.401182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.401193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.401201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1647400) on tqpair=0x15f8630 00:27:39.043 [2024-07-13 07:15:08.401214] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:27:39.043 00:27:39.043 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:39.043 [2024-07-13 07:15:08.437754] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:39.043 [2024-07-13 07:15:08.437800] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609566 ] 00:27:39.043 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.043 [2024-07-13 07:15:08.455197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
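This second spdk_nvme_identify run connects to the I/O subsystem nqn.2016-06.io.spdk:cnode1 using the transport ID string passed via -r, and the DEBUG lines that follow trace the same connect sequence as before (ICReq/ICResp, FABRIC CONNECT, register reads, IDENTIFY). As a rough approximation of what the tool does with that argument, assuming SPDK's public host API rather than the tool's actual source:

    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        opts.opts_size = sizeof(opts); /* expected by newer SPDK releases */
        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* The same transport ID string the test passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the connect/init state machine traced in the DEBUG
         * lines: ICReq/ICResp, FABRIC CONNECT, CC/CSTS reads, IDENTIFY. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr); /* triggers the shutdown handshake above */
        return 0;
    }
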
00:27:39.043 [2024-07-13 07:15:08.472657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:39.043 [2024-07-13 07:15:08.472704] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:39.043 [2024-07-13 07:15:08.472713] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:39.043 [2024-07-13 07:15:08.472732] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:39.043 [2024-07-13 07:15:08.472741] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:39.043 [2024-07-13 07:15:08.472955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:39.043 [2024-07-13 07:15:08.472997] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c71630 0 00:27:39.043 [2024-07-13 07:15:08.479879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:39.043 [2024-07-13 07:15:08.479921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:39.043 [2024-07-13 07:15:08.479929] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:39.043 [2024-07-13 07:15:08.479935] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:39.043 [2024-07-13 07:15:08.479972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.479983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.479990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.043 [2024-07-13 07:15:08.480004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:39.043 [2024-07-13 07:15:08.480029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.043 [2024-07-13 07:15:08.486882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.486904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.486912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.486919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.043 [2024-07-13 07:15:08.486939] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:39.043 [2024-07-13 07:15:08.486951] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:39.043 [2024-07-13 07:15:08.486960] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:39.043 [2024-07-13 07:15:08.486983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.486993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.487000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.043 [2024-07-13 07:15:08.487011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.043 [2024-07-13 07:15:08.487036] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.043 [2024-07-13 07:15:08.487236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.043 [2024-07-13 07:15:08.487253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.043 [2024-07-13 07:15:08.487275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.487283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.043 [2024-07-13 07:15:08.487291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:39.043 [2024-07-13 07:15:08.487306] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:39.043 [2024-07-13 07:15:08.487319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.487327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.043 [2024-07-13 07:15:08.487349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.044 [2024-07-13 07:15:08.487360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.044 [2024-07-13 07:15:08.487383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.044 [2024-07-13 07:15:08.487577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.044 [2024-07-13 07:15:08.487593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.044 [2024-07-13 07:15:08.487600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.487607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.044 [2024-07-13 07:15:08.487615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:39.044 [2024-07-13 07:15:08.487629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:39.044 [2024-07-13 07:15:08.487642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.487650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.487656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.044 [2024-07-13 07:15:08.487667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.044 [2024-07-13 07:15:08.487688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.044 [2024-07-13 07:15:08.487891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.044 [2024-07-13 07:15:08.487907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.044 [2024-07-13 07:15:08.487914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.487921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.044 [2024-07-13 07:15:08.487930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:39.044 [2024-07-13 07:15:08.487948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.487957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.487964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.044 [2024-07-13 07:15:08.487979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.044 [2024-07-13 07:15:08.488002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.044 [2024-07-13 07:15:08.488120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.044 [2024-07-13 07:15:08.488137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.044 [2024-07-13 07:15:08.488160] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.488167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.044 [2024-07-13 07:15:08.488174] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:39.044 [2024-07-13 07:15:08.488183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:39.044 [2024-07-13 07:15:08.488196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:39.044 [2024-07-13 07:15:08.488307] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:39.044 [2024-07-13 07:15:08.488324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:39.044 [2024-07-13 07:15:08.488344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.488359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.488371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.044 [2024-07-13 07:15:08.488382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.044 [2024-07-13 07:15:08.488404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.044 [2024-07-13 07:15:08.488557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.044 [2024-07-13 07:15:08.488581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.044 [2024-07-13 07:15:08.488596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.488608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.044 [2024-07-13 07:15:08.488621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:39.044 [2024-07-13 07:15:08.488647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.488657] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.488664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.044 [2024-07-13 07:15:08.488675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.044 [2024-07-13 07:15:08.488698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.044 [2024-07-13 07:15:08.488921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.044 [2024-07-13 07:15:08.488938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.044 [2024-07-13 07:15:08.488946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.044 [2024-07-13 07:15:08.488953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.044 [2024-07-13 07:15:08.488960] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:39.044 [2024-07-13 07:15:08.488969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:39.308 [2024-07-13 07:15:08.488987] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:39.308 [2024-07-13 07:15:08.489006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:39.308 [2024-07-13 07:15:08.489020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.489028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.308 [2024-07-13 07:15:08.489039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.308 [2024-07-13 07:15:08.489061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.308 [2024-07-13 07:15:08.489243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.308 [2024-07-13 07:15:08.489259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.308 [2024-07-13 07:15:08.489266] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.489273] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=4096, cccid=0 00:27:39.308 [2024-07-13 07:15:08.489280] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cbff80) on tqpair(0x1c71630): expected_datao=0, payload_size=4096 00:27:39.308 [2024-07-13 07:15:08.489288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.489305] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.489314] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.529999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.308 [2024-07-13 07:15:08.530020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.308 [2024-07-13 07:15:08.530028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:39.308 [2024-07-13 07:15:08.530035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.308 [2024-07-13 07:15:08.530046] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:39.308 [2024-07-13 07:15:08.530059] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:39.308 [2024-07-13 07:15:08.530068] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:39.308 [2024-07-13 07:15:08.530074] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:39.308 [2024-07-13 07:15:08.530082] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:39.308 [2024-07-13 07:15:08.530090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:39.308 [2024-07-13 07:15:08.530104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:39.308 [2024-07-13 07:15:08.530117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.530124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.530131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.308 [2024-07-13 07:15:08.530142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:39.308 [2024-07-13 07:15:08.530165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.308 [2024-07-13 07:15:08.530308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.308 [2024-07-13 07:15:08.530320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.308 [2024-07-13 07:15:08.530327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.530334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.308 [2024-07-13 07:15:08.530348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.308 [2024-07-13 07:15:08.530356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.530372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.309 [2024-07-13 07:15:08.530382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.530403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.309 [2024-07-13 07:15:08.530413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.309 [2024-07-13 
07:15:08.530419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.530434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.309 [2024-07-13 07:15:08.530443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.530479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.309 [2024-07-13 07:15:08.530488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.530505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.530517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.530534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.309 [2024-07-13 07:15:08.530555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cbff80, cid 0, qid 0 00:27:39.309 [2024-07-13 07:15:08.530581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0100, cid 1, qid 0 00:27:39.309 [2024-07-13 07:15:08.530589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0280, cid 2, qid 0 00:27:39.309 [2024-07-13 07:15:08.530596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.309 [2024-07-13 07:15:08.530604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0580, cid 4, qid 0 00:27:39.309 [2024-07-13 07:15:08.530747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.309 [2024-07-13 07:15:08.530762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.309 [2024-07-13 07:15:08.530769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0580) on tqpair=0x1c71630 00:27:39.309 [2024-07-13 07:15:08.530783] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:39.309 [2024-07-13 07:15:08.530792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.530806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.530820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number 
of queues (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.530831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.530844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.530878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:39.309 [2024-07-13 07:15:08.530902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0580, cid 4, qid 0 00:27:39.309 [2024-07-13 07:15:08.531078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.309 [2024-07-13 07:15:08.531090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.309 [2024-07-13 07:15:08.531097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0580) on tqpair=0x1c71630 00:27:39.309 [2024-07-13 07:15:08.531183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.531202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.531215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.531248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.309 [2024-07-13 07:15:08.531268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0580, cid 4, qid 0 00:27:39.309 [2024-07-13 07:15:08.531421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.309 [2024-07-13 07:15:08.531436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.309 [2024-07-13 07:15:08.531443] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531449] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=4096, cccid=4 00:27:39.309 [2024-07-13 07:15:08.531457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc0580) on tqpair(0x1c71630): expected_datao=0, payload_size=4096 00:27:39.309 [2024-07-13 07:15:08.531464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531474] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531482] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.309 [2024-07-13 07:15:08.531561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.309 [2024-07-13 07:15:08.531568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0580) on tqpair=0x1c71630 00:27:39.309 [2024-07-13 
07:15:08.531589] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:39.309 [2024-07-13 07:15:08.531606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.531622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.531634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.531655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.309 [2024-07-13 07:15:08.531676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0580, cid 4, qid 0 00:27:39.309 [2024-07-13 07:15:08.531898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.309 [2024-07-13 07:15:08.531913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.309 [2024-07-13 07:15:08.531920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=4096, cccid=4 00:27:39.309 [2024-07-13 07:15:08.531934] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc0580) on tqpair(0x1c71630): expected_datao=0, payload_size=4096 00:27:39.309 [2024-07-13 07:15:08.531941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531951] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.531959] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.309 [2024-07-13 07:15:08.532037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.309 [2024-07-13 07:15:08.532044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0580) on tqpair=0x1c71630 00:27:39.309 [2024-07-13 07:15:08.532070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c71630) 00:27:39.309 [2024-07-13 07:15:08.532119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.309 [2024-07-13 07:15:08.532141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0580, cid 4, qid 0 00:27:39.309 [2024-07-13 07:15:08.532293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:27:39.309 [2024-07-13 07:15:08.532308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.309 [2024-07-13 07:15:08.532314] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532321] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=4096, cccid=4 00:27:39.309 [2024-07-13 07:15:08.532328] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc0580) on tqpair(0x1c71630): expected_datao=0, payload_size=4096 00:27:39.309 [2024-07-13 07:15:08.532335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532345] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532352] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.309 [2024-07-13 07:15:08.532427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.309 [2024-07-13 07:15:08.532434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.309 [2024-07-13 07:15:08.532440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0580) on tqpair=0x1c71630 00:27:39.309 [2024-07-13 07:15:08.532452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:39.309 [2024-07-13 07:15:08.532522] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:39.309 [2024-07-13 07:15:08.532529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:39.310 [2024-07-13 07:15:08.532537] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:39.310 [2024-07-13 07:15:08.532557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.532579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.532589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.532600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.532607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:27:39.310 [2024-07-13 07:15:08.532613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.532621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.310 [2024-07-13 07:15:08.532645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0580, cid 4, qid 0 00:27:39.310 [2024-07-13 07:15:08.532672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0700, cid 5, qid 0 00:27:39.310 [2024-07-13 07:15:08.532834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.532863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.532878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.532885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0580) on tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.532895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.532905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.532911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.532918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0700) on tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.532935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.532944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.532954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.532975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0700, cid 5, qid 0 00:27:39.310 [2024-07-13 07:15:08.533141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.533171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.533178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0700) on tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.533201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.533224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.533244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0700, cid 5, qid 0 00:27:39.310 [2024-07-13 07:15:08.533371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.533386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.533392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0700) on 
tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.533414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.533433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.533453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0700, cid 5, qid 0 00:27:39.310 [2024-07-13 07:15:08.533623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.533634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.533641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0700) on tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.533670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.533690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.533701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.533717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.533743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.533759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.533770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.533776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c71630) 00:27:39.310 [2024-07-13 07:15:08.533785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.310 [2024-07-13 07:15:08.533805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0700, cid 5, qid 0 00:27:39.310 [2024-07-13 07:15:08.533831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0580, cid 4, qid 0 00:27:39.310 [2024-07-13 07:15:08.533839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0880, cid 6, qid 0 00:27:39.310 [2024-07-13 07:15:08.533846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0a00, cid 7, qid 0 00:27:39.310 [2024-07-13 07:15:08.535666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.310 [2024-07-13 
07:15:08.535682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.310 [2024-07-13 07:15:08.535692] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535699] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=8192, cccid=5 00:27:39.310 [2024-07-13 07:15:08.535706] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc0700) on tqpair(0x1c71630): expected_datao=0, payload_size=8192 00:27:39.310 [2024-07-13 07:15:08.535713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535723] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535730] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.310 [2024-07-13 07:15:08.535746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.310 [2024-07-13 07:15:08.535752] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535758] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=512, cccid=4 00:27:39.310 [2024-07-13 07:15:08.535765] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc0580) on tqpair(0x1c71630): expected_datao=0, payload_size=512 00:27:39.310 [2024-07-13 07:15:08.535772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535781] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535787] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.310 [2024-07-13 07:15:08.535803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.310 [2024-07-13 07:15:08.535809] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535815] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=512, cccid=6 00:27:39.310 [2024-07-13 07:15:08.535822] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc0880) on tqpair(0x1c71630): expected_datao=0, payload_size=512 00:27:39.310 [2024-07-13 07:15:08.535829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535837] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535844] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.310 [2024-07-13 07:15:08.535882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.310 [2024-07-13 07:15:08.535888] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535894] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c71630): datao=0, datal=4096, cccid=7 00:27:39.310 [2024-07-13 07:15:08.535902] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc0a00) on tqpair(0x1c71630): expected_datao=0, payload_size=4096 00:27:39.310 [2024-07-13 07:15:08.535909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.310 [2024-07-13 
07:15:08.535935] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535942] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.535959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.535966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.535973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0700) on tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.535992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.536002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.536009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.536016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0580) on tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.536030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.536042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.536049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.536056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0880) on tqpair=0x1c71630 00:27:39.310 [2024-07-13 07:15:08.536066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.310 [2024-07-13 07:15:08.536076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.310 [2024-07-13 07:15:08.536083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.310 [2024-07-13 07:15:08.536089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0a00) on tqpair=0x1c71630 00:27:39.310 ===================================================== 00:27:39.310 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.310 ===================================================== 00:27:39.310 Controller Capabilities/Features 00:27:39.310 ================================ 00:27:39.310 Vendor ID: 8086 00:27:39.310 Subsystem Vendor ID: 8086 00:27:39.310 Serial Number: SPDK00000000000001 00:27:39.310 Model Number: SPDK bdev Controller 00:27:39.310 Firmware Version: 24.09 00:27:39.311 Recommended Arb Burst: 6 00:27:39.311 IEEE OUI Identifier: e4 d2 5c 00:27:39.311 Multi-path I/O 00:27:39.311 May have multiple subsystem ports: Yes 00:27:39.311 May have multiple controllers: Yes 00:27:39.311 Associated with SR-IOV VF: No 00:27:39.311 Max Data Transfer Size: 131072 00:27:39.311 Max Number of Namespaces: 32 00:27:39.311 Max Number of I/O Queues: 127 00:27:39.311 NVMe Specification Version (VS): 1.3 00:27:39.311 NVMe Specification Version (Identify): 1.3 00:27:39.311 Maximum Queue Entries: 128 00:27:39.311 Contiguous Queues Required: Yes 00:27:39.311 Arbitration Mechanisms Supported 00:27:39.311 Weighted Round Robin: Not Supported 00:27:39.311 Vendor Specific: Not Supported 00:27:39.311 Reset Timeout: 15000 ms 00:27:39.311 Doorbell Stride: 4 bytes 00:27:39.311 NVM Subsystem Reset: Not Supported 00:27:39.311 Command Sets Supported 00:27:39.311 NVM Command Set: Supported 00:27:39.311 Boot Partition: Not Supported 00:27:39.311 Memory Page Size Minimum: 4096 bytes 00:27:39.311 
Memory Page Size Maximum: 4096 bytes 00:27:39.311 Persistent Memory Region: Not Supported 00:27:39.311 Optional Asynchronous Events Supported 00:27:39.311 Namespace Attribute Notices: Supported 00:27:39.311 Firmware Activation Notices: Not Supported 00:27:39.311 ANA Change Notices: Not Supported 00:27:39.311 PLE Aggregate Log Change Notices: Not Supported 00:27:39.311 LBA Status Info Alert Notices: Not Supported 00:27:39.311 EGE Aggregate Log Change Notices: Not Supported 00:27:39.311 Normal NVM Subsystem Shutdown event: Not Supported 00:27:39.311 Zone Descriptor Change Notices: Not Supported 00:27:39.311 Discovery Log Change Notices: Not Supported 00:27:39.311 Controller Attributes 00:27:39.311 128-bit Host Identifier: Supported 00:27:39.311 Non-Operational Permissive Mode: Not Supported 00:27:39.311 NVM Sets: Not Supported 00:27:39.311 Read Recovery Levels: Not Supported 00:27:39.311 Endurance Groups: Not Supported 00:27:39.311 Predictable Latency Mode: Not Supported 00:27:39.311 Traffic Based Keep Alive: Not Supported 00:27:39.311 Namespace Granularity: Not Supported 00:27:39.311 SQ Associations: Not Supported 00:27:39.311 UUID List: Not Supported 00:27:39.311 Multi-Domain Subsystem: Not Supported 00:27:39.311 Fixed Capacity Management: Not Supported 00:27:39.311 Variable Capacity Management: Not Supported 00:27:39.311 Delete Endurance Group: Not Supported 00:27:39.311 Delete NVM Set: Not Supported 00:27:39.311 Extended LBA Formats Supported: Not Supported 00:27:39.311 Flexible Data Placement Supported: Not Supported 00:27:39.311 00:27:39.311 Controller Memory Buffer Support 00:27:39.311 ================================ 00:27:39.311 Supported: No 00:27:39.311 00:27:39.311 Persistent Memory Region Support 00:27:39.311 ================================ 00:27:39.311 Supported: No 00:27:39.311 00:27:39.311 Admin Command Set Attributes 00:27:39.311 ============================ 00:27:39.311 Security Send/Receive: Not Supported 00:27:39.311 Format NVM: Not Supported 00:27:39.311 Firmware Activate/Download: Not Supported 00:27:39.311 Namespace Management: Not Supported 00:27:39.311 Device Self-Test: Not Supported 00:27:39.311 Directives: Not Supported 00:27:39.311 NVMe-MI: Not Supported 00:27:39.311 Virtualization Management: Not Supported 00:27:39.311 Doorbell Buffer Config: Not Supported 00:27:39.311 Get LBA Status Capability: Not Supported 00:27:39.311 Command & Feature Lockdown Capability: Not Supported 00:27:39.311 Abort Command Limit: 4 00:27:39.311 Async Event Request Limit: 4 00:27:39.311 Number of Firmware Slots: N/A 00:27:39.311 Firmware Slot 1 Read-Only: N/A 00:27:39.311 Firmware Activation Without Reset: N/A 00:27:39.311 Multiple Update Detection Support: N/A 00:27:39.311 Firmware Update Granularity: No Information Provided 00:27:39.311 Per-Namespace SMART Log: No 00:27:39.311 Asymmetric Namespace Access Log Page: Not Supported 00:27:39.311 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:39.311 Command Effects Log Page: Supported 00:27:39.311 Get Log Page Extended Data: Supported 00:27:39.311 Telemetry Log Pages: Not Supported 00:27:39.311 Persistent Event Log Pages: Not Supported 00:27:39.311 Supported Log Pages Log Page: May Support 00:27:39.311 Commands Supported & Effects Log Page: Not Supported 00:27:39.311 Feature Identifiers & Effects Log Page: May Support 00:27:39.311 NVMe-MI Commands & Effects Log Page: May Support 00:27:39.311 Data Area 4 for Telemetry Log: Not Supported 00:27:39.311 Error Log Page Entries Supported: 128 00:27:39.311 Keep Alive: Supported 00:27:39.311 Keep 
Alive Granularity: 10000 ms 00:27:39.311 00:27:39.311 NVM Command Set Attributes 00:27:39.311 ========================== 00:27:39.311 Submission Queue Entry Size 00:27:39.311 Max: 64 00:27:39.311 Min: 64 00:27:39.311 Completion Queue Entry Size 00:27:39.311 Max: 16 00:27:39.311 Min: 16 00:27:39.311 Number of Namespaces: 32 00:27:39.311 Compare Command: Supported 00:27:39.311 Write Uncorrectable Command: Not Supported 00:27:39.311 Dataset Management Command: Supported 00:27:39.311 Write Zeroes Command: Supported 00:27:39.311 Set Features Save Field: Not Supported 00:27:39.311 Reservations: Supported 00:27:39.311 Timestamp: Not Supported 00:27:39.311 Copy: Supported 00:27:39.311 Volatile Write Cache: Present 00:27:39.311 Atomic Write Unit (Normal): 1 00:27:39.311 Atomic Write Unit (PFail): 1 00:27:39.311 Atomic Compare & Write Unit: 1 00:27:39.311 Fused Compare & Write: Supported 00:27:39.311 Scatter-Gather List 00:27:39.311 SGL Command Set: Supported 00:27:39.311 SGL Keyed: Supported 00:27:39.311 SGL Bit Bucket Descriptor: Not Supported 00:27:39.311 SGL Metadata Pointer: Not Supported 00:27:39.311 Oversized SGL: Not Supported 00:27:39.311 SGL Metadata Address: Not Supported 00:27:39.311 SGL Offset: Supported 00:27:39.311 Transport SGL Data Block: Not Supported 00:27:39.311 Replay Protected Memory Block: Not Supported 00:27:39.311 00:27:39.311 Firmware Slot Information 00:27:39.311 ========================= 00:27:39.311 Active slot: 1 00:27:39.311 Slot 1 Firmware Revision: 24.09 00:27:39.311 00:27:39.311 00:27:39.311 Commands Supported and Effects 00:27:39.311 ============================== 00:27:39.311 Admin Commands 00:27:39.311 -------------- 00:27:39.311 Get Log Page (02h): Supported 00:27:39.311 Identify (06h): Supported 00:27:39.311 Abort (08h): Supported 00:27:39.311 Set Features (09h): Supported 00:27:39.311 Get Features (0Ah): Supported 00:27:39.311 Asynchronous Event Request (0Ch): Supported 00:27:39.311 Keep Alive (18h): Supported 00:27:39.311 I/O Commands 00:27:39.311 ------------ 00:27:39.311 Flush (00h): Supported LBA-Change 00:27:39.311 Write (01h): Supported LBA-Change 00:27:39.311 Read (02h): Supported 00:27:39.311 Compare (05h): Supported 00:27:39.311 Write Zeroes (08h): Supported LBA-Change 00:27:39.311 Dataset Management (09h): Supported LBA-Change 00:27:39.311 Copy (19h): Supported LBA-Change 00:27:39.311 00:27:39.311 Error Log 00:27:39.311 ========= 00:27:39.311 00:27:39.311 Arbitration 00:27:39.311 =========== 00:27:39.311 Arbitration Burst: 1 00:27:39.311 00:27:39.311 Power Management 00:27:39.311 ================ 00:27:39.311 Number of Power States: 1 00:27:39.311 Current Power State: Power State #0 00:27:39.311 Power State #0: 00:27:39.311 Max Power: 0.00 W 00:27:39.311 Non-Operational State: Operational 00:27:39.311 Entry Latency: Not Reported 00:27:39.311 Exit Latency: Not Reported 00:27:39.311 Relative Read Throughput: 0 00:27:39.311 Relative Read Latency: 0 00:27:39.311 Relative Write Throughput: 0 00:27:39.311 Relative Write Latency: 0 00:27:39.311 Idle Power: Not Reported 00:27:39.311 Active Power: Not Reported 00:27:39.311 Non-Operational Permissive Mode: Not Supported 00:27:39.311 00:27:39.311 Health Information 00:27:39.311 ================== 00:27:39.311 Critical Warnings: 00:27:39.311 Available Spare Space: OK 00:27:39.311 Temperature: OK 00:27:39.311 Device Reliability: OK 00:27:39.311 Read Only: No 00:27:39.311 Volatile Memory Backup: OK 00:27:39.311 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:39.311 Temperature Threshold: 0 Kelvin (-273 Celsius) 
00:27:39.311 Available Spare: 0% 00:27:39.311 Available Spare Threshold: 0% 00:27:39.311 Life Percentage Used:[2024-07-13 07:15:08.536236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.311 [2024-07-13 07:15:08.536248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c71630) 00:27:39.311 [2024-07-13 07:15:08.536259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.311 [2024-07-13 07:15:08.536281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0a00, cid 7, qid 0 00:27:39.311 [2024-07-13 07:15:08.536470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.311 [2024-07-13 07:15:08.536482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.311 [2024-07-13 07:15:08.536489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.311 [2024-07-13 07:15:08.536495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0a00) on tqpair=0x1c71630 00:27:39.311 [2024-07-13 07:15:08.536543] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:39.311 [2024-07-13 07:15:08.536561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cbff80) on tqpair=0x1c71630 00:27:39.311 [2024-07-13 07:15:08.536571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.312 [2024-07-13 07:15:08.536580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0100) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.536602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.312 [2024-07-13 07:15:08.536611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0280) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.536618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.312 [2024-07-13 07:15:08.536625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.536632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.312 [2024-07-13 07:15:08.536644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.536651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.536657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.536667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.536688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.536826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.536838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.536860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.536874] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.536887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.536895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.536906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.536917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.536944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.537142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.537154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.537161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.537176] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:39.312 [2024-07-13 07:15:08.537183] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:39.312 [2024-07-13 07:15:08.537199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.537240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.537259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.537392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.537407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.537414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.537437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.537463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.537483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.537623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.537637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.537644] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.537666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.537692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.537711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.537824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.537839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.537845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.537898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.537914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.537925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.537946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.538119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.538131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.538138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.538177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.538202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.538236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.538361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.538376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.538383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 
00:27:39.312 [2024-07-13 07:15:08.538406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.538431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.538451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.538563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.538578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.538584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.538607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.538632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.538652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.538764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.538778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.538785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.538808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.538827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c71630) 00:27:39.312 [2024-07-13 07:15:08.538837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.312 [2024-07-13 07:15:08.540880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc0400, cid 3, qid 0 00:27:39.312 [2024-07-13 07:15:08.541011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.312 [2024-07-13 07:15:08.541023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.312 [2024-07-13 07:15:08.541030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.312 [2024-07-13 07:15:08.541037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc0400) on tqpair=0x1c71630 00:27:39.312 [2024-07-13 07:15:08.541050] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 3 milliseconds 00:27:39.312 0% 00:27:39.312 Data Units 
Read: 0 00:27:39.312 Data Units Written: 0 00:27:39.312 Host Read Commands: 0 00:27:39.313 Host Write Commands: 0 00:27:39.313 Controller Busy Time: 0 minutes 00:27:39.313 Power Cycles: 0 00:27:39.313 Power On Hours: 0 hours 00:27:39.313 Unsafe Shutdowns: 0 00:27:39.313 Unrecoverable Media Errors: 0 00:27:39.313 Lifetime Error Log Entries: 0 00:27:39.313 Warning Temperature Time: 0 minutes 00:27:39.313 Critical Temperature Time: 0 minutes 00:27:39.313 00:27:39.313 Number of Queues 00:27:39.313 ================ 00:27:39.313 Number of I/O Submission Queues: 127 00:27:39.313 Number of I/O Completion Queues: 127 00:27:39.313 00:27:39.313 Active Namespaces 00:27:39.313 ================= 00:27:39.313 Namespace ID:1 00:27:39.313 Error Recovery Timeout: Unlimited 00:27:39.313 Command Set Identifier: NVM (00h) 00:27:39.313 Deallocate: Supported 00:27:39.313 Deallocated/Unwritten Error: Not Supported 00:27:39.313 Deallocated Read Value: Unknown 00:27:39.313 Deallocate in Write Zeroes: Not Supported 00:27:39.313 Deallocated Guard Field: 0xFFFF 00:27:39.313 Flush: Supported 00:27:39.313 Reservation: Supported 00:27:39.313 Namespace Sharing Capabilities: Multiple Controllers 00:27:39.313 Size (in LBAs): 131072 (0GiB) 00:27:39.313 Capacity (in LBAs): 131072 (0GiB) 00:27:39.313 Utilization (in LBAs): 131072 (0GiB) 00:27:39.313 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:39.313 EUI64: ABCDEF0123456789 00:27:39.313 UUID: 390be8a2-2e22-412d-96a0-f191e18cac50 00:27:39.313 Thin Provisioning: Not Supported 00:27:39.313 Per-NS Atomic Units: Yes 00:27:39.313 Atomic Boundary Size (Normal): 0 00:27:39.313 Atomic Boundary Size (PFail): 0 00:27:39.313 Atomic Boundary Offset: 0 00:27:39.313 Maximum Single Source Range Length: 65535 00:27:39.313 Maximum Copy Length: 65535 00:27:39.313 Maximum Source Range Count: 1 00:27:39.313 NGUID/EUI64 Never Reused: No 00:27:39.313 Namespace Write Protected: No 00:27:39.313 Number of LBA Formats: 1 00:27:39.313 Current LBA Format: LBA Format #00 00:27:39.313 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:39.313 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.313 rmmod nvme_tcp 00:27:39.313 rmmod nvme_fabrics 00:27:39.313 rmmod nvme_keyring 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:39.313 07:15:08 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1609336 ']' 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1609336 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1609336 ']' 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1609336 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1609336 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1609336' 00:27:39.313 killing process with pid 1609336 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1609336 00:27:39.313 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1609336 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.572 07:15:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.109 07:15:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:42.109 00:27:42.109 real 0m5.315s 00:27:42.109 user 0m4.349s 00:27:42.109 sys 0m1.824s 00:27:42.109 07:15:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:42.109 07:15:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.109 ************************************ 00:27:42.109 END TEST nvmf_identify 00:27:42.109 ************************************ 00:27:42.109 07:15:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:42.109 07:15:10 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:42.109 07:15:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:42.109 07:15:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.109 07:15:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:42.109 ************************************ 00:27:42.109 START TEST nvmf_perf 00:27:42.109 ************************************ 00:27:42.109 07:15:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:42.109 * Looking for test storage... 
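With nvmf_identify finished and its target torn down, run_test launches the perf suite. A hypothetical way to re-run just this suite by hand, assuming the same checkout layout as the workspace path traced above:

  # from the SPDK workspace root; --transport=tcp selects the TCP
  # transport exercised throughout the run below
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo test/nvmf/host/perf.sh --transport=tcp
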
00:27:42.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.109 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.110 07:15:11 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:42.110 07:15:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:43.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:43.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:43.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:43.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.488 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.746 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.746 07:15:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:27:43.746 00:27:43.746 --- 10.0.0.2 ping statistics --- 00:27:43.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.746 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:27:43.746 00:27:43.746 --- 10.0.0.1 ping statistics --- 00:27:43.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.746 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1611631 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1611631 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1611631 ']' 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.746 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:43.746 [2024-07-13 07:15:13.142942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:43.746 [2024-07-13 07:15:13.143022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.746 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.746 [2024-07-13 07:15:13.180753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:44.004 [2024-07-13 07:15:13.212021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:44.004 [2024-07-13 07:15:13.300898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
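Everything nvmf_tcp_init traced above boils down to a two-port loopback fixture: one E810 port is moved into a network namespace to act as the target side, the other stays in the root namespace as the initiator, and both directions are ping-verified before the target starts. A condensed sketch of the same setup, assuming this run's device names (cvl_0_0/cvl_0_1; substitute your own NIC ports on other hosts):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
  # the target then runs inside the namespace, as nvmfappstart does below:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
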
00:27:44.004 [2024-07-13 07:15:13.300963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.004 [2024-07-13 07:15:13.300977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.004 [2024-07-13 07:15:13.300989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.004 [2024-07-13 07:15:13.301012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.004 [2024-07-13 07:15:13.301076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.004 [2024-07-13 07:15:13.301143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.004 [2024-07-13 07:15:13.301203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.004 [2024-07-13 07:15:13.301205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.004 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.004 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:44.004 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:44.004 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.004 07:15:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:44.263 07:15:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.263 07:15:13 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:44.263 07:15:13 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:47.550 07:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:47.550 07:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:47.550 07:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:47.550 07:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:47.808 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:47.808 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:47.808 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:47.808 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:47.808 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:48.066 [2024-07-13 07:15:17.318992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.066 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.324 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:48.324 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:48.582 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev 
in $bdevs 00:27:48.582 07:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:48.838 07:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.095 [2024-07-13 07:15:18.314517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.095 07:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:49.353 07:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:49.354 07:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:49.354 07:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:49.354 07:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:50.727 Initializing NVMe Controllers 00:27:50.727 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:50.727 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:50.727 Initialization complete. Launching workers. 00:27:50.727 ======================================================== 00:27:50.727 Latency(us) 00:27:50.727 Device Information : IOPS MiB/s Average min max 00:27:50.727 PCIE (0000:88:00.0) NSID 1 from core 0: 85418.91 333.67 374.07 42.70 7518.39 00:27:50.727 ======================================================== 00:27:50.727 Total : 85418.91 333.67 374.07 42.70 7518.39 00:27:50.727 00:27:50.727 07:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.727 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.664 Initializing NVMe Controllers 00:27:51.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:51.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:51.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:51.664 Initialization complete. Launching workers. 
00:27:51.664 ======================================================== 00:27:51.664 Latency(us) 00:27:51.664 Device Information : IOPS MiB/s Average min max 00:27:51.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.00 0.24 16877.98 182.16 45155.40 00:27:51.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 47.00 0.18 21542.00 7953.19 47921.41 00:27:51.664 ======================================================== 00:27:51.664 Total : 108.00 0.42 18907.69 182.16 47921.41 00:27:51.664 00:27:51.664 07:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.664 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.047 Initializing NVMe Controllers 00:27:53.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:53.047 Initialization complete. Launching workers. 00:27:53.047 ======================================================== 00:27:53.047 Latency(us) 00:27:53.047 Device Information : IOPS MiB/s Average min max 00:27:53.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8410.43 32.85 3808.55 505.97 10792.06 00:27:53.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3831.74 14.97 8402.46 5516.81 18405.18 00:27:53.047 ======================================================== 00:27:53.047 Total : 12242.17 47.82 5246.42 505.97 18405.18 00:27:53.047 00:27:53.047 07:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:53.047 07:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:53.047 07:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.047 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.575 Initializing NVMe Controllers 00:27:55.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.575 Controller IO queue size 128, less than required. 00:27:55.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:55.575 Controller IO queue size 128, less than required. 00:27:55.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:55.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:55.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:55.575 Initialization complete. Launching workers. 
00:27:55.575 ======================================================== 00:27:55.575 Latency(us) 00:27:55.575 Device Information : IOPS MiB/s Average min max 00:27:55.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1037.21 259.30 127519.83 80492.28 204866.88 00:27:55.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 551.52 137.88 239133.48 86834.52 349332.93 00:27:55.575 ======================================================== 00:27:55.575 Total : 1588.73 397.18 166265.81 80492.28 349332.93 00:27:55.575 00:27:55.575 07:15:24 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:55.575 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.575 No valid NVMe controllers or AIO or URING devices found 00:27:55.575 Initializing NVMe Controllers 00:27:55.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.575 Controller IO queue size 128, less than required. 00:27:55.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:55.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:55.575 Controller IO queue size 128, less than required. 00:27:55.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:55.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:55.575 WARNING: Some requested NVMe devices were skipped 00:27:55.575 07:15:25 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:55.832 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.364 Initializing NVMe Controllers 00:27:58.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.364 Controller IO queue size 128, less than required. 00:27:58.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.364 Controller IO queue size 128, less than required. 00:27:58.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:58.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:58.364 Initialization complete. Launching workers. 
00:27:58.364 00:27:58.364 ==================== 00:27:58.364 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:58.364 TCP transport: 00:27:58.364 polls: 20344 00:27:58.364 idle_polls: 9098 00:27:58.364 sock_completions: 11246 00:27:58.364 nvme_completions: 4921 00:27:58.364 submitted_requests: 7278 00:27:58.364 queued_requests: 1 00:27:58.364 00:27:58.364 ==================== 00:27:58.364 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:58.364 TCP transport: 00:27:58.364 polls: 21502 00:27:58.364 idle_polls: 10095 00:27:58.364 sock_completions: 11407 00:27:58.364 nvme_completions: 4927 00:27:58.364 submitted_requests: 7368 00:27:58.364 queued_requests: 1 00:27:58.364 ======================================================== 00:27:58.364 Latency(us) 00:27:58.364 Device Information : IOPS MiB/s Average min max 00:27:58.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1229.98 307.50 106675.43 66513.62 198810.10 00:27:58.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1231.48 307.87 106625.98 46586.25 157026.87 00:27:58.364 ======================================================== 00:27:58.364 Total : 2461.47 615.37 106650.69 46586.25 198810.10 00:27:58.364 00:27:58.364 07:15:27 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:58.364 07:15:27 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.364 07:15:27 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:58.364 07:15:27 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:27:58.364 07:15:27 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:01.653 07:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=1d923a7d-adbf-4aba-b611-44224e73a319 00:28:01.653 07:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 1d923a7d-adbf-4aba-b611-44224e73a319 00:28:01.653 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1d923a7d-adbf-4aba-b611-44224e73a319 00:28:01.653 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:01.653 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:01.653 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:01.653 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:01.910 { 00:28:01.910 "uuid": "1d923a7d-adbf-4aba-b611-44224e73a319", 00:28:01.910 "name": "lvs_0", 00:28:01.910 "base_bdev": "Nvme0n1", 00:28:01.910 "total_data_clusters": 238234, 00:28:01.910 "free_clusters": 238234, 00:28:01.910 "block_size": 512, 00:28:01.910 "cluster_size": 4194304 00:28:01.910 } 00:28:01.910 ]' 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1d923a7d-adbf-4aba-b611-44224e73a319") .free_clusters' 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1d923a7d-adbf-4aba-b611-44224e73a319") .cluster_size' 00:28:01.910 07:15:31 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:01.910 952936 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:01.910 07:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1d923a7d-adbf-4aba-b611-44224e73a319 lbd_0 20480 00:28:02.476 07:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=d5412fdb-da84-4742-b577-57e921ba1946 00:28:02.476 07:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore d5412fdb-da84-4742-b577-57e921ba1946 lvs_n_0 00:28:03.413 07:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=16af7dd8-3566-4807-9ad4-b67640fc0c6c 00:28:03.413 07:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 16af7dd8-3566-4807-9ad4-b67640fc0c6c 00:28:03.413 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=16af7dd8-3566-4807-9ad4-b67640fc0c6c 00:28:03.413 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:03.413 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:03.413 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:03.413 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:03.670 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:03.670 { 00:28:03.670 "uuid": "1d923a7d-adbf-4aba-b611-44224e73a319", 00:28:03.670 "name": "lvs_0", 00:28:03.670 "base_bdev": "Nvme0n1", 00:28:03.670 "total_data_clusters": 238234, 00:28:03.670 "free_clusters": 233114, 00:28:03.670 "block_size": 512, 00:28:03.670 "cluster_size": 4194304 00:28:03.670 }, 00:28:03.670 { 00:28:03.670 "uuid": "16af7dd8-3566-4807-9ad4-b67640fc0c6c", 00:28:03.670 "name": "lvs_n_0", 00:28:03.670 "base_bdev": "d5412fdb-da84-4742-b577-57e921ba1946", 00:28:03.670 "total_data_clusters": 5114, 00:28:03.670 "free_clusters": 5114, 00:28:03.670 "block_size": 512, 00:28:03.670 "cluster_size": 4194304 00:28:03.670 } 00:28:03.670 ]' 00:28:03.670 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="16af7dd8-3566-4807-9ad4-b67640fc0c6c") .free_clusters' 00:28:03.670 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:03.670 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="16af7dd8-3566-4807-9ad4-b67640fc0c6c") .cluster_size' 00:28:03.670 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:03.670 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:03.670 07:15:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:03.670 20456 00:28:03.671 07:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:03.671 07:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16af7dd8-3566-4807-9ad4-b67640fc0c6c lbd_nest_0 20456 00:28:03.928 07:15:33 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=0a2f6783-22d6-4f6f-8b77-4c853b227c82 00:28:03.928 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.186 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:04.186 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0a2f6783-22d6-4f6f-8b77-4c853b227c82 00:28:04.443 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.702 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:04.702 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:04.702 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:04.702 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:04.702 07:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:04.702 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.902 Initializing NVMe Controllers 00:28:16.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:16.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:16.902 Initialization complete. Launching workers. 00:28:16.902 ======================================================== 00:28:16.902 Latency(us) 00:28:16.902 Device Information : IOPS MiB/s Average min max 00:28:16.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 51.98 0.03 19269.89 207.38 48542.32 00:28:16.902 ======================================================== 00:28:16.902 Total : 51.98 0.03 19269.89 207.38 48542.32 00:28:16.902 00:28:16.902 07:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:16.902 07:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.902 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.935 Initializing NVMe Controllers 00:28:26.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:26.935 Initialization complete. Launching workers. 
00:28:26.935 ======================================================== 00:28:26.935 Latency(us) 00:28:26.935 Device Information : IOPS MiB/s Average min max 00:28:26.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.67 10.21 12243.66 6133.18 47896.18 00:28:26.935 ======================================================== 00:28:26.935 Total : 81.67 10.21 12243.66 6133.18 47896.18 00:28:26.935 00:28:26.935 07:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:26.935 07:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:26.935 07:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:26.935 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.912 Initializing NVMe Controllers 00:28:36.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.912 Initialization complete. Launching workers. 00:28:36.912 ======================================================== 00:28:36.912 Latency(us) 00:28:36.912 Device Information : IOPS MiB/s Average min max 00:28:36.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7397.42 3.61 4326.42 296.49 11187.29 00:28:36.912 ======================================================== 00:28:36.912 Total : 7397.42 3.61 4326.42 296.49 11187.29 00:28:36.912 00:28:36.912 07:16:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:36.912 07:16:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.912 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.887 Initializing NVMe Controllers 00:28:46.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:46.887 Initialization complete. Launching workers. 00:28:46.887 ======================================================== 00:28:46.887 Latency(us) 00:28:46.887 Device Information : IOPS MiB/s Average min max 00:28:46.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2007.50 250.94 15956.11 1294.19 54501.70 00:28:46.887 ======================================================== 00:28:46.887 Total : 2007.50 250.94 15956.11 1294.19 54501.70 00:28:46.887 00:28:46.887 07:16:15 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:46.887 07:16:15 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:46.887 07:16:15 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.887 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.858 Initializing NVMe Controllers 00:28:56.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.858 Controller IO queue size 128, less than required. 00:28:56.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:56.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:56.858 Initialization complete. Launching workers. 00:28:56.858 ======================================================== 00:28:56.858 Latency(us) 00:28:56.858 Device Information : IOPS MiB/s Average min max 00:28:56.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11934.75 5.83 10727.02 1766.56 24763.95 00:28:56.858 ======================================================== 00:28:56.858 Total : 11934.75 5.83 10727.02 1766.56 24763.95 00:28:56.858 00:28:56.858 07:16:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:56.858 07:16:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.858 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.864 Initializing NVMe Controllers 00:29:06.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:06.864 Controller IO queue size 128, less than required. 00:29:06.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:06.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:06.864 Initialization complete. Launching workers. 00:29:06.864 ======================================================== 00:29:06.864 Latency(us) 00:29:06.864 Device Information : IOPS MiB/s Average min max 00:29:06.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1202.60 150.32 107027.90 15107.16 222672.43 00:29:06.864 ======================================================== 00:29:06.864 Total : 1202.60 150.32 107027.90 15107.16 222672.43 00:29:06.864 00:29:07.123 07:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.381 07:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a2f6783-22d6-4f6f-8b77-4c853b227c82 00:29:07.947 07:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:08.205 07:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5412fdb-da84-4742-b577-57e921ba1946 00:29:08.771 07:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.771 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.771 rmmod nvme_tcp 00:29:08.771 rmmod nvme_fabrics 00:29:08.771 rmmod nvme_keyring 00:29:09.031 07:16:38 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1611631 ']' 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1611631 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1611631 ']' 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1611631 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611631 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611631' 00:29:09.031 killing process with pid 1611631 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1611631 00:29:09.031 07:16:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1611631 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:10.406 07:16:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.947 07:16:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.947 00:29:12.947 real 1m30.900s 00:29:12.947 user 5m34.917s 00:29:12.947 sys 0m16.076s 00:29:12.947 07:16:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.947 07:16:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:12.947 ************************************ 00:29:12.947 END TEST nvmf_perf 00:29:12.947 ************************************ 00:29:12.947 07:16:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:12.947 07:16:41 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:12.947 07:16:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.947 07:16:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.947 07:16:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.947 ************************************ 00:29:12.947 START TEST nvmf_fio_host 00:29:12.947 ************************************ 00:29:12.947 07:16:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:12.947 * Looking for test 
storage... 00:29:12.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:12.947 07:16:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.947 07:16:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.947 07:16:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.947 07:16:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.947 07:16:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.947 07:16:42 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.948 07:16:42 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.850 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.850 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.850 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.850 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.850 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.850 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.850 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:14.851 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:14.851 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:14.851 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:14.851 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
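With is_hw=yes established, the nvmf_tcp_init sequence in the trace below isolates one port of the discovered NIC in a private network namespace to act as the target, leaving the other port in the root namespace as the initiator. Condensed, the iproute2 plumbing it performs is (interface names as on this rig; other inventory nodes use different cvl_* names):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

The sub-millisecond ping RTTs that follow confirm the path end to end before any NVMe traffic is attempted.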
00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.851 07:16:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:14.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:29:14.851 00:29:14.851 --- 10.0.0.2 ping statistics --- 00:29:14.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.851 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:14.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:29:14.851 00:29:14.851 --- 10.0.0.1 ping statistics --- 00:29:14.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.851 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1623611 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1623611 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1623611 ']' 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.851 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.851 [2024-07-13 07:16:44.149913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:29:14.851 [2024-07-13 07:16:44.150000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.851 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.851 [2024-07-13 07:16:44.186292] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
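The EAL/app notices continuing here are the startup banner of nvmf_tgt, which host/fio.sh has just launched inside the namespace and now waits on. Stripped down, that launch step looks like the following sketch; the polling loop is a hand-rolled stand-in for the autotest waitforlisten helper seen in the trace:

    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the target answers on its default RPC socket
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
    "$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # host/fio.sh@29

The -m 0xF mask pins the target to four cores, which is why four "Reactor started" lines appear below.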
00:29:14.851 [2024-07-13 07:16:44.216773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.110 [2024-07-13 07:16:44.309977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.110 [2024-07-13 07:16:44.310021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.110 [2024-07-13 07:16:44.310036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.110 [2024-07-13 07:16:44.310048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.110 [2024-07-13 07:16:44.310059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.110 [2024-07-13 07:16:44.310116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.110 [2024-07-13 07:16:44.310173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.110 [2024-07-13 07:16:44.310202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.110 [2024-07-13 07:16:44.310204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.110 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.110 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:15.110 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:15.368 [2024-07-13 07:16:44.656277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.368 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:15.368 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.368 07:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.368 07:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:15.626 Malloc1 00:29:15.626 07:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.884 07:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:16.142 07:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:16.400 [2024-07-13 07:16:45.764038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.400 07:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:16.659 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:16.919 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:16.919 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:16.919 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:16.919 07:16:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:16.919 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:16.919 fio-3.35 00:29:16.919 Starting 1 thread 00:29:16.919 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.454 00:29:19.454 test: (groupid=0, jobs=1): err= 0: pid=1623974: Sat Jul 13 07:16:48 2024 00:29:19.454 read: IOPS=9158, BW=35.8MiB/s (37.5MB/s)(71.8MiB/2007msec) 00:29:19.454 slat (nsec): min=1992, max=155469, avg=2826.53, stdev=1887.89 00:29:19.454 clat (usec): min=2623, max=13281, avg=7720.71, stdev=580.09 00:29:19.454 lat (usec): min=2653, max=13284, avg=7723.54, stdev=579.98 00:29:19.454 clat percentiles (usec): 00:29:19.454 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:29:19.454 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:29:19.454 | 
70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:29:19.454 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11338], 99.95th=[12518], 00:29:19.454 | 99.99th=[13304] 00:29:19.454 bw ( KiB/s): min=35696, max=37248, per=99.96%, avg=36620.00, stdev=660.17, samples=4 00:29:19.454 iops : min= 8924, max= 9312, avg=9155.00, stdev=165.04, samples=4 00:29:19.454 write: IOPS=9168, BW=35.8MiB/s (37.6MB/s)(71.9MiB/2007msec); 0 zone resets 00:29:19.454 slat (usec): min=2, max=143, avg= 3.02, stdev= 1.46 00:29:19.454 clat (usec): min=1386, max=11531, avg=6207.64, stdev=498.28 00:29:19.454 lat (usec): min=1394, max=11534, avg=6210.66, stdev=498.25 00:29:19.454 clat percentiles (usec): 00:29:19.454 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5866], 00:29:19.454 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:29:19.454 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:29:19.454 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9765], 99.95th=[10814], 00:29:19.454 | 99.99th=[11469] 00:29:19.454 bw ( KiB/s): min=36432, max=36992, per=100.00%, avg=36690.00, stdev=235.19, samples=4 00:29:19.454 iops : min= 9108, max= 9248, avg=9172.50, stdev=58.80, samples=4 00:29:19.454 lat (msec) : 2=0.02%, 4=0.10%, 10=99.77%, 20=0.12% 00:29:19.454 cpu : usr=59.67%, sys=34.95%, ctx=69, majf=0, minf=41 00:29:19.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:19.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:19.454 issued rwts: total=18381,18401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:19.454 00:29:19.454 Run status group 0 (all jobs): 00:29:19.454 READ: bw=35.8MiB/s (37.5MB/s), 35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=71.8MiB (75.3MB), run=2007-2007msec 00:29:19.454 WRITE: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2007-2007msec 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:19.454 07:16:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:19.454 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:19.454 fio-3.35 00:29:19.454 Starting 1 thread 00:29:19.712 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.279 00:29:22.279 test: (groupid=0, jobs=1): err= 0: pid=1624417: Sat Jul 13 07:16:51 2024 00:29:22.279 read: IOPS=8372, BW=131MiB/s (137MB/s)(262MiB/2006msec) 00:29:22.279 slat (usec): min=2, max=125, avg= 3.80, stdev= 1.96 00:29:22.279 clat (usec): min=3119, max=16405, avg=8955.97, stdev=2156.02 00:29:22.279 lat (usec): min=3123, max=16408, avg=8959.77, stdev=2156.06 00:29:22.279 clat percentiles (usec): 00:29:22.279 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7111], 00:29:22.279 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9241], 00:29:22.279 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11863], 95.00th=[12649], 00:29:22.279 | 99.00th=[14877], 99.50th=[15270], 99.90th=[15926], 99.95th=[16057], 00:29:22.279 | 99.99th=[16319] 00:29:22.279 bw ( KiB/s): min=60800, max=74880, per=51.35%, avg=68792.00, stdev=6786.33, samples=4 00:29:22.279 iops : min= 3800, max= 4680, avg=4299.50, stdev=424.15, samples=4 00:29:22.279 write: IOPS=4860, BW=75.9MiB/s (79.6MB/s)(140MiB/1849msec); 0 zone resets 00:29:22.279 slat (usec): min=30, max=162, avg=34.40, stdev= 5.96 00:29:22.280 clat (usec): min=4912, max=18374, avg=11133.90, stdev=1948.25 00:29:22.280 lat (usec): min=4944, max=18406, avg=11168.31, stdev=1948.26 00:29:22.280 clat percentiles (usec): 00:29:22.280 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9503], 00:29:22.280 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11338], 00:29:22.280 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13960], 95.00th=[14877], 00:29:22.280 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 00:29:22.280 | 99.99th=[18482] 00:29:22.280 bw ( KiB/s): min=63232, max=77888, per=92.05%, avg=71584.00, stdev=7438.79, 
samples=4 00:29:22.280 iops : min= 3952, max= 4868, avg=4474.00, stdev=464.92, samples=4 00:29:22.280 lat (msec) : 4=0.13%, 10=57.22%, 20=42.65% 00:29:22.280 cpu : usr=75.81%, sys=21.00%, ctx=28, majf=0, minf=59 00:29:22.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:22.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:22.280 issued rwts: total=16795,8987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:22.280 00:29:22.280 Run status group 0 (all jobs): 00:29:22.280 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=262MiB (275MB), run=2006-2006msec 00:29:22.280 WRITE: bw=75.9MiB/s (79.6MB/s), 75.9MiB/s-75.9MiB/s (79.6MB/s-79.6MB/s), io=140MiB (147MB), run=1849-1849msec 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:22.280 07:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:25.562 Nvme0n1 00:29:25.562 07:16:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b11b7d3a-8614-44c9-a011-dc8f4dc91efc 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b11b7d3a-8614-44c9-a011-dc8f4dc91efc 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=b11b7d3a-8614-44c9-a011-dc8f4dc91efc 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:28.848 { 00:29:28.848 "uuid": 
"b11b7d3a-8614-44c9-a011-dc8f4dc91efc", 00:29:28.848 "name": "lvs_0", 00:29:28.848 "base_bdev": "Nvme0n1", 00:29:28.848 "total_data_clusters": 930, 00:29:28.848 "free_clusters": 930, 00:29:28.848 "block_size": 512, 00:29:28.848 "cluster_size": 1073741824 00:29:28.848 } 00:29:28.848 ]' 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b11b7d3a-8614-44c9-a011-dc8f4dc91efc") .free_clusters' 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b11b7d3a-8614-44c9-a011-dc8f4dc91efc") .cluster_size' 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:28.848 952320 00:29:28.848 07:16:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:29.107 3bcb6823-6ea8-4d4f-8d15-dc71c191297f 00:29:29.107 07:16:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:29.365 07:16:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:29.623 07:16:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:29.882 07:16:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:29.882 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:29.882 fio-3.35 00:29:29.882 Starting 1 thread 00:29:30.141 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.675 00:29:32.675 test: (groupid=0, jobs=1): err= 0: pid=1625699: Sat Jul 13 07:17:01 2024 00:29:32.675 read: IOPS=5798, BW=22.6MiB/s (23.7MB/s)(45.5MiB/2007msec) 00:29:32.675 slat (nsec): min=1944, max=134846, avg=2705.03, stdev=2371.65 00:29:32.675 clat (usec): min=816, max=171865, avg=12158.99, stdev=11861.52 00:29:32.675 lat (usec): min=819, max=171918, avg=12161.70, stdev=11861.79 00:29:32.675 clat percentiles (msec): 00:29:32.675 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:29:32.675 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:29:32.675 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:29:32.675 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:29:32.675 | 99.99th=[ 171] 00:29:32.675 bw ( KiB/s): min=15984, max=25576, per=99.69%, avg=23120.00, stdev=4757.68, samples=4 00:29:32.675 iops : min= 3996, max= 6394, avg=5780.00, stdev=1189.42, samples=4 00:29:32.675 write: IOPS=5779, BW=22.6MiB/s (23.7MB/s)(45.3MiB/2007msec); 0 zone resets 00:29:32.675 slat (usec): min=2, max=104, avg= 2.80, stdev= 1.89 00:29:32.675 clat (usec): min=316, max=169463, avg=9789.58, stdev=11114.25 00:29:32.675 lat (usec): min=319, max=169468, avg=9792.38, stdev=11114.46 00:29:32.675 clat percentiles (msec): 00:29:32.675 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:29:32.675 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:29:32.675 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:29:32.675 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:29:32.675 | 99.99th=[ 169] 00:29:32.675 bw ( KiB/s): min=16968, max=25344, per=99.92%, avg=23098.00, stdev=4089.37, samples=4 00:29:32.675 iops : min= 4242, max= 6336, avg=5774.50, stdev=1022.34, samples=4 00:29:32.675 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:32.675 lat (msec) : 2=0.03%, 4=0.13%, 10=48.92%, 20=50.34%, 250=0.55% 00:29:32.675 cpu : usr=51.05%, sys=45.41%, 
ctx=117, majf=0, minf=41 00:29:32.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:32.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:32.675 issued rwts: total=11637,11599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:32.675 00:29:32.675 Run status group 0 (all jobs): 00:29:32.675 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.5MiB (47.7MB), run=2007-2007msec 00:29:32.675 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.3MiB (47.5MB), run=2007-2007msec 00:29:32.675 07:17:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:32.675 07:17:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:33.608 07:17:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=92b1e4eb-a025-4216-8986-581a8656012e 00:29:33.608 07:17:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 92b1e4eb-a025-4216-8986-581a8656012e 00:29:33.608 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=92b1e4eb-a025-4216-8986-581a8656012e 00:29:33.608 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:33.608 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:33.608 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:33.608 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:33.864 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:33.864 { 00:29:33.864 "uuid": "b11b7d3a-8614-44c9-a011-dc8f4dc91efc", 00:29:33.864 "name": "lvs_0", 00:29:33.864 "base_bdev": "Nvme0n1", 00:29:33.864 "total_data_clusters": 930, 00:29:33.864 "free_clusters": 0, 00:29:33.864 "block_size": 512, 00:29:33.864 "cluster_size": 1073741824 00:29:33.864 }, 00:29:33.864 { 00:29:33.864 "uuid": "92b1e4eb-a025-4216-8986-581a8656012e", 00:29:33.864 "name": "lvs_n_0", 00:29:33.864 "base_bdev": "3bcb6823-6ea8-4d4f-8d15-dc71c191297f", 00:29:33.864 "total_data_clusters": 237847, 00:29:33.864 "free_clusters": 237847, 00:29:33.864 "block_size": 512, 00:29:33.864 "cluster_size": 4194304 00:29:33.864 } 00:29:33.864 ]' 00:29:33.864 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="92b1e4eb-a025-4216-8986-581a8656012e") .free_clusters' 00:29:33.864 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:33.864 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="92b1e4eb-a025-4216-8986-581a8656012e") .cluster_size' 00:29:34.121 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:34.121 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:34.121 07:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:34.121 951388 00:29:34.121 07:17:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:34.686 5fa317c9-b034-4864-8f78-fc8ad8a8c002 00:29:34.686 07:17:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:34.943 07:17:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:35.200 07:17:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
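Each fio job in this test reaches the target through SPDK's userspace NVMe fio plugin rather than the kernel initiator: the plugin is injected with LD_PRELOAD (the assignment visible just above) and the whole fabric address is packed into --filename. Taken in isolation, the invocation that follows is:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

ioengine=spdk is set inside the job file itself, which is why the fio banner below reports ioengine=spdk, iodepth=128.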
00:29:35.458 07:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:35.715 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:35.715 fio-3.35 00:29:35.715 Starting 1 thread 00:29:35.715 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.241 00:29:38.241 test: (groupid=0, jobs=1): err= 0: pid=1626436: Sat Jul 13 07:17:07 2024 00:29:38.241 read: IOPS=5875, BW=22.9MiB/s (24.1MB/s)(46.1MiB/2009msec) 00:29:38.241 slat (usec): min=2, max=120, avg= 2.59, stdev= 1.91 00:29:38.241 clat (usec): min=4246, max=19242, avg=12028.89, stdev=1020.43 00:29:38.241 lat (usec): min=4250, max=19244, avg=12031.49, stdev=1020.35 00:29:38.241 clat percentiles (usec): 00:29:38.241 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:29:38.241 | 30.00th=[11469], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:29:38.241 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:29:38.241 | 99.00th=[14353], 99.50th=[14484], 99.90th=[17695], 99.95th=[19006], 00:29:38.241 | 99.99th=[19006] 00:29:38.241 bw ( KiB/s): min=22192, max=24160, per=99.87%, avg=23470.00, stdev=875.44, samples=4 00:29:38.241 iops : min= 5548, max= 6040, avg=5867.50, stdev=218.86, samples=4 00:29:38.241 write: IOPS=5864, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec); 0 zone resets 00:29:38.241 slat (usec): min=2, max=103, avg= 2.73, stdev= 1.47 00:29:38.241 clat (usec): min=2047, max=17327, avg=9618.03, stdev=904.74 00:29:38.241 lat (usec): min=2052, max=17330, avg=9620.77, stdev=904.72 00:29:38.241 clat percentiles (usec): 00:29:38.241 | 1.00th=[ 7504], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:38.241 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:29:38.241 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:29:38.241 | 99.00th=[11469], 99.50th=[11994], 99.90th=[16450], 99.95th=[16712], 00:29:38.241 | 99.99th=[17433] 00:29:38.241 bw ( KiB/s): min=23192, max=23616, per=99.95%, avg=23446.00, stdev=187.20, samples=4 00:29:38.241 iops : min= 5798, max= 5904, avg=5861.50, stdev=46.80, samples=4 00:29:38.241 lat (msec) : 4=0.05%, 10=35.06%, 20=64.89% 00:29:38.241 cpu : usr=57.97%, sys=38.30%, ctx=84, majf=0, minf=41 00:29:38.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:38.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:38.241 issued rwts: total=11803,11782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:38.241 00:29:38.241 Run status group 0 (all jobs): 00:29:38.241 READ: bw=22.9MiB/s (24.1MB/s), 22.9MiB/s-22.9MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.3MB), run=2009-2009msec 00:29:38.241 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.3MB), run=2009-2009msec 00:29:38.241 07:17:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:38.241 07:17:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:38.241 07:17:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:42.482 07:17:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:42.482 07:17:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:45.767 07:17:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:45.767 07:17:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:47.671 rmmod nvme_tcp 00:29:47.671 rmmod nvme_fabrics 00:29:47.671 rmmod nvme_keyring 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1623611 ']' 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1623611 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1623611 ']' 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1623611 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1623611 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1623611' 00:29:47.671 killing process with pid 1623611 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1623611 00:29:47.671 07:17:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1623611 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.671 07:17:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.206 07:17:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:50.206 00:29:50.206 real 0m37.155s 00:29:50.206 user 2m22.230s 00:29:50.206 sys 0m7.269s 00:29:50.206 07:17:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.206 07:17:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.206 ************************************ 00:29:50.206 END TEST nvmf_fio_host 00:29:50.206 ************************************ 00:29:50.206 07:17:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:50.206 07:17:19 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:50.206 07:17:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:50.206 07:17:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.206 07:17:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.206 ************************************ 00:29:50.206 START TEST nvmf_failover 00:29:50.206 ************************************ 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:50.206 * Looking for test storage... 
00:29:50.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... repeated go/golangci/protoc toolchain entries, dumped in full at paths/export.sh@2 above ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same toolchain entries prepended once more; duplicate dump elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... PATH identical to the paths/export.sh@4 value; duplicate dump elided ...] 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:50.206 07:17:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:52.110 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.110 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:52.110 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:52.110 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:52.110 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:52.110 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:52.110 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:52.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:52.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:52.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:52.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:52.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:52.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:29:52.111 00:29:52.111 --- 10.0.0.2 ping statistics --- 00:29:52.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.111 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:29:52.111 00:29:52.111 --- 10.0.0.1 ping statistics --- 00:29:52.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.111 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.111 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1629681 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1629681 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1629681 ']' 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.112 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:52.112 [2024-07-13 07:17:21.456819] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:29:52.112 [2024-07-13 07:17:21.456916] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.112 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.112 [2024-07-13 07:17:21.499736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:52.112 [2024-07-13 07:17:21.528043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:52.370 [2024-07-13 07:17:21.614451] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.370 [2024-07-13 07:17:21.614508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.370 [2024-07-13 07:17:21.614537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.370 [2024-07-13 07:17:21.614549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.370 [2024-07-13 07:17:21.614559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.370 [2024-07-13 07:17:21.614631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.370 [2024-07-13 07:17:21.614709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.370 [2024-07-13 07:17:21.614712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.370 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.370 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:52.370 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.370 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.370 07:17:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:52.370 07:17:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.370 07:17:21 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.628 [2024-07-13 07:17:21.967063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.628 07:17:21 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:52.886 Malloc0 00:29:52.886 07:17:22 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.144 07:17:22 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.402 07:17:22 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.660 [2024-07-13 07:17:22.977714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.660 07:17:22 nvmf_tcp.nvmf_failover -- 
host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:53.916 [2024-07-13 07:17:23.214490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:53.916 07:17:23 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:54.174 [2024-07-13 07:17:23.455289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1629967 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1629967 /var/tmp/bdevperf.sock 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1629967 ']' 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:54.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
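Everything from here to the end of the test is driven over that private RPC socket: bdevperf is started idle, both target listeners are attached as paths to a single NVMe bdev, and only then is the verify workload kicked off. Reduced to a sketch (flags and paths exactly as logged above; the ampersands mark what the harness runs in the background):

    # Sketch only -- how failover.sh drives bdevperf below.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock
    $SPDK/build/examples/bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 15 -f &  # -z: start idle, wait for RPCs
    $SPDK/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1           # primary path
    $SPDK/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1           # second path, same bdev
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &              # launch the 15 s verify run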
00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.174 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.432 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:54.432 07:17:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:54.432 07:17:23 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:54.690 NVMe0n1 00:29:54.690 07:17:24 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:55.258 00 00:29:55.258 07:17:24 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1630099 00:29:55.258 07:17:24 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:55.258 07:17:24 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:56.194 07:17:25 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.454 [2024-07-13 07:17:25.666464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc685b0 is same with the state(5) to be set 00:29:56.454 [... this message repeats for tqpair=0xc685b0 close to ninety times (timestamps 07:17:25.666464 through 07:17:25.667613) while the 4420 listener is torn down; the duplicate lines are elided here ...]
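Taken literally, each of those lines says only that teardown code tried to move the queue pair's receive state to the state it was already in, so one dying connection floods the log without indicating a test failure. When reading a saved console log by hand, a run like this collapses with one command (a sketch: build.log is a stand-in filename, and -f3 skips the three timestamp fields that prefix each line in this particular log; adjust the field count for other logs):

    # Sketch only -- count consecutive duplicate lines, ignoring timestamp columns.
    uniq -c -f3 build.log | awk '$1 > 1 {print}'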
00:29:56.455 07:17:25 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:59.786 07:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.786 00 00:29:59.786 07:17:29 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:00.045 [2024-07-13 07:17:29.357441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc69970 is same with the state(5) to be set 00:30:00.045 [... roughly thirty repeats for tqpair=0xc69970 elided (timestamps 07:17:29.357441 through 07:17:29.357861) as the 4421 listener goes down ...] 00:30:00.046 07:17:29 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:03.335 07:17:32 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.335 [2024-07-13 07:17:32.658044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.335 07:17:32 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:04.274 07:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:04.531 [2024-07-13 07:17:33.911409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6a050 is same with the state(5) to be set 00:30:04.531 [... eight further repeats for tqpair=0xc6a050 elided (timestamps through 07:17:33.911578) as the 4422 listener goes down ...]
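Stripped of timestamps and error spam, the whole exercise above is a listener shuffle in which at least one path stays reachable at every step, which is why the verify job survives to report success below. In order (a sketch; rpc and nqn abbreviate the full paths in the log):

    # Sketch only -- the listener moves performed above, in order.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420    # I/O fails over to 4421
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn                         # open a third path
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421    # fail over to 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420       # bring 4420 back
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422    # fail back to 4420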
00:30:04.531 07:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1630099 00:30:11.103 0 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1629967 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1629967 ']' 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1629967 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629967 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629967' killing process with pid 1629967 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1629967 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1629967 00:30:11.103 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:11.103 [2024-07-13 07:17:23.517785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:11.103 [2024-07-13 07:17:23.517785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629967 ] 00:30:11.103 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.103 [2024-07-13 07:17:23.549413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK.
Enabled only for validation. 00:30:11.103 [2024-07-13 07:17:23.578446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.103 [2024-07-13 07:17:23.667311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.103 Running I/O for 15 seconds... 00:30:11.103 [2024-07-13 07:17:25.669072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.103 [2024-07-13 07:17:25.669114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.103 [analogous nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for every outstanding I/O, 07:17:25.669141-07:17:25.671954: READ lba:77592-77760 and lba:77768-77808 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), WRITE lba:77816-78088 and lba:78096-78344 (SGL DATA BLOCK OFFSET 0x0 len:0x1000); each completed ABORTED - SQ DELETION (00/08)] 00:30:11.105 [2024-07-13 07:17:25.671983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:11.105 [2024-07-13 07:17:25.671999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:30:11.105 [2024-07-13 07:17:25.672012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.105 [analogous nvme_qpair_abort_queued_reqs "aborting queued i/o" / nvme_qpair_manual_complete_request / WRITE print_command / ABORTED - SQ DELETION print_completion sequences repeated for queued WRITE lba:78360-78600, 07:17:25.672029-07:17:25.673532] 00:30:11.106 [2024-07-13 07:17:25.673591] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfd3f10 was disconnected and freed. reset controller.
00:30:11.106 [2024-07-13 07:17:25.673609] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:11.106 [2024-07-13 07:17:25.673641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.106 [2024-07-13 07:17:25.673666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.106 [2024-07-13 07:17:25.673682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.106 [2024-07-13 07:17:25.673700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.106 [2024-07-13 07:17:25.673715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.106 [2024-07-13 07:17:25.673728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.106 [2024-07-13 07:17:25.673742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.106 [2024-07-13 07:17:25.673755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.106 [2024-07-13 07:17:25.673769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.106 [2024-07-13 07:17:25.677024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.107 [2024-07-13 07:17:25.677060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad850 (9): Bad file descriptor 00:30:11.107 [2024-07-13 07:17:25.706860] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:11.107 [2024-07-13 07:17:29.359405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:11.107 [2024-07-13 07:17:29.359447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every remaining in-flight I/O on qid:1 (READ lba 77664 through 77896, SGL TRANSPORT DATA BLOCK; WRITE lba 77920 through 78296, SGL DATA BLOCK OFFSET len:0x1000), each completed ABORTED - SQ DELETION (00/08) ...]
00:30:11.109 [2024-07-13 07:17:29.361884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:11.109 [2024-07-13 07:17:29.361912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0
00:30:11.109 [2024-07-13 07:17:29.361925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:11.109 [2024-07-13 07:17:29.361942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... nvme_qpair_abort_queued_reqs/nvme_qpair_manual_complete_request repeat identically for the queued WRITE lba 78312 through 78672 and the queued READ lba 77904 and 77912, each completed ABORTED - SQ DELETION (00/08) ...]
00:30:11.110 [2024-07-13 07:17:29.364421] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1178470 was disconnected and freed. reset controller.
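Two cleanup phases are visible above: the completions first drain commands that were actually outstanding on the deleted submission queue, and nvme_qpair_abort_queued_reqs then walks the software queue of requests that were never submitted, completing each one "manually" with the same status. A generic sketch of that second phase, assuming nothing about SPDK's internals (struct queued_req and the list head are illustrative stand-ins, not SPDK types):

    /* Sketch of "aborting queued i/o": synthesize an ABORTED - SQ DELETION
     * completion for every request still waiting in the software queue.
     * Queue and request types here are illustrative only. */
    #include <sys/queue.h>
    #include "spdk/nvme.h"

    struct queued_req {
        void (*cb)(void *ctx, const struct spdk_nvme_cpl *cpl);
        void *ctx;
        TAILQ_ENTRY(queued_req) link;
    };
    TAILQ_HEAD(req_list, queued_req);

    static void
    abort_queued_reqs(struct req_list *queued)
    {
        struct spdk_nvme_cpl cpl = {0};
        struct queued_req *req;

        cpl.status.sct = SPDK_NVME_SCT_GENERIC;
        cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;

        while ((req = TAILQ_FIRST(queued)) != NULL) {
            TAILQ_REMOVE(queued, req, link);
            req->cb(req->ctx, &cpl); /* "Command completed manually:" */
        }
    }

Routing these through the normal callback keeps the upper layer's accounting consistent, which is why each manual completion is followed by the same print_completion notice as a real one.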
00:30:11.110 [2024-07-13 07:17:29.364439] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:11.111 [2024-07-13 07:17:29.364472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:11.111 [2024-07-13 07:17:29.364491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:11.111 [2024-07-13 07:17:29.364505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:11.111 [2024-07-13 07:17:29.364519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:11.111 [2024-07-13 07:17:29.364532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:11.111 [2024-07-13 07:17:29.364553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:11.111 [2024-07-13 07:17:29.364567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:11.111 [2024-07-13 07:17:29.364580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:11.111 [2024-07-13 07:17:29.364593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:11.111 [2024-07-13 07:17:29.364639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad850 (9): Bad file descriptor
00:30:11.111 [2024-07-13 07:17:29.367940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:11.111 [2024-07-13 07:17:29.525989] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
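Each failover notice moves the same subsystem (nqn.2016-06.io.spdk:cnode1) to the next TCP listener on 10.0.0.2: 4420, then 4421, then 4422. A sketch of how a host-side consumer of the public API could describe those three paths as transport IDs; the address, ports, and NQN are taken from this log, the array layout is an assumption:

    /* Sketch: build the three failover paths seen above (10.0.0.2,
     * ports 4420/4421/4422, same subsystem NQN) as transport IDs. */
    #include "spdk/nvme.h"

    static void
    build_failover_trids(struct spdk_nvme_transport_id trids[3])
    {
        const char *trsvcids[3] = {"4420", "4421", "4422"};

        for (int i = 0; i < 3; i++) {
            memset(&trids[i], 0, sizeof(trids[i]));
            spdk_nvme_trid_populate_transport(&trids[i], SPDK_NVME_TRANSPORT_TCP);
            trids[i].adrfam = SPDK_NVMF_ADRFAM_IPV4;
            snprintf(trids[i].traddr, sizeof(trids[i].traddr), "10.0.0.2");
            snprintf(trids[i].trsvcid, sizeof(trids[i].trsvcid), "%s", trsvcids[i]);
            snprintf(trids[i].subnqn, sizeof(trids[i].subnqn),
                     "nqn.2016-06.io.spdk:cnode1");
        }
    }

The bdev_nvme notices above follow the same idea: the alternate paths are known in advance, and bdev_nvme_failover_trid rotates to the next one when the active path's queue pairs are torn down.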
00:30:11.111 [2024-07-13 07:17:33.911918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:11.111 [2024-07-13 07:17:33.911959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining WRITE/READ command/completion pairs trimmed: every outstanding I/O on qid:1 (lba 35224-36240) completes as ABORTED - SQ DELETION (00/08) while the qpair is torn down for the reset ...]
00:30:11.114 [2024-07-13 07:17:33.915804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:11.114 [2024-07-13 07:17:33.915819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:11.114 [2024-07-13 07:17:33.915832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35480 len:8 PRP1 0x0 PRP2 0x0
00:30:11.114 [2024-07-13 07:17:33.915844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:11.114 [2024-07-13 07:17:33.915918] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfdd5c0 was disconnected and freed. reset controller.
00:30:11.114 [2024-07-13 07:17:33.915937] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:11.114 [2024-07-13 07:17:33.915975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.114 [2024-07-13 07:17:33.915995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.114 [2024-07-13 07:17:33.916010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.114 [2024-07-13 07:17:33.916023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.114 [2024-07-13 07:17:33.916037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.114 [2024-07-13 07:17:33.916050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.114 [2024-07-13 07:17:33.916067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.114 [2024-07-13 07:17:33.916081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.114 [2024-07-13 07:17:33.916094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.114 [2024-07-13 07:17:33.919388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.114 [2024-07-13 07:17:33.919427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad850 (9): Bad file descriptor 00:30:11.114 [2024-07-13 07:17:34.087576] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
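That completes the scripted chain of three failovers, 4420 -> 4421 -> 4422 -> 4420, each ending in a successful reset. A quick sketch for listing the hops in order from a saved copy of this console output (the file name build.log is an assumption; the test itself does not save the first run's output under that name):

    # one line per path switch, in the order they happened
    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' build.log
    # e.g.  Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
    #       Start failover from 10.0.0.2:4422 to 10.0.0.2:4420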
00:30:11.114
00:30:11.114 Latency(us)
00:30:11.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:11.114 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:11.114 Verification LBA range: start 0x0 length 0x4000
00:30:11.114 NVMe0n1 : 15.05 8282.80 32.35 903.48 0.00 13869.32 819.20 42719.76
00:30:11.114 ===================================================================================================================
00:30:11.114 Total : 8282.80 32.35 903.48 0.00 13869.32 819.20 42719.76
00:30:11.114 Received shutdown signal, test time was about 15.000000 seconds
00:30:11.114
00:30:11.114 Latency(us)
00:30:11.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:11.114 ===================================================================================================================
00:30:11.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1631935
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1631935 /var/tmp/bdevperf.sock
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1631935 ']'
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
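host/failover.sh@65-67 above condenses the whole first phase into one assertion: one 'Resetting controller successful' per hop. A sketch of the same check, taken directly from that trace (only the build.log input name is an assumption):

    count=$(grep -c 'Resetting controller successful' build.log)
    (( count != 3 )) && exit 1   # three hops, so exactly three successful resets expected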
00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.114 07:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.114 07:17:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:11.114 07:17:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:11.114 07:17:40 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:11.114 [2024-07-13 07:17:40.386835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:11.114 07:17:40 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:11.372 [2024-07-13 07:17:40.627522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:11.372 07:17:40 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:11.629 NVMe0n1 00:30:11.887 07:17:41 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:12.143 00:30:12.143 07:17:41 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:12.707 00:30:12.707 07:17:41 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:12.708 07:17:41 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:12.966 07:17:42 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:13.225 07:17:42 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:16.507 07:17:45 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:16.507 07:17:45 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:16.507 07:17:45 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1632606 00:30:16.507 07:17:45 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:16.507 07:17:45 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1632606 00:30:17.440 0 00:30:17.440 07:17:46 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:17.440 [2024-07-13 07:17:39.909030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
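The trace above is what gives bdev_nvme its failover path list: two extra listeners are added to the subsystem, and the same bdev name NVMe0 is then attached once per port, so each later attach registers an alternate trid rather than a new controller. Condensed into a loop, with the full rpc.py path shortened to a variable but the commands otherwise exactly as traced:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do
        # repeated attaches under the same -b name register failover paths
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done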
00:30:17.440 [2024-07-13 07:17:39.909121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631935 ] 00:30:17.440 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.440 [2024-07-13 07:17:39.942638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:17.440 [2024-07-13 07:17:39.971353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.440 [2024-07-13 07:17:40.065645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.440 [2024-07-13 07:17:42.406795] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:17.440 [2024-07-13 07:17:42.406889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.440 [2024-07-13 07:17:42.406912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.440 [2024-07-13 07:17:42.406943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.440 [2024-07-13 07:17:42.406958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.440 [2024-07-13 07:17:42.406971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.441 [2024-07-13 07:17:42.406985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.441 [2024-07-13 07:17:42.406999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.441 [2024-07-13 07:17:42.407012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.441 [2024-07-13 07:17:42.407025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.441 [2024-07-13 07:17:42.407067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.441 [2024-07-13 07:17:42.407100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2c850 (9): Bad file descriptor 00:30:17.441 [2024-07-13 07:17:42.412541] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:17.441 Running I/O for 1 seconds... 
00:30:17.441
00:30:17.441 Latency(us)
00:30:17.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:17.441 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:17.441 Verification LBA range: start 0x0 length 0x4000
00:30:17.441 NVMe0n1 : 1.01 8660.47 33.83 0.00 0.00 14717.64 3034.07 12524.66
00:30:17.441 ===================================================================================================================
00:30:17.441 Total : 8660.47 33.83 0.00 0.00 14717.64 3034.07 12524.66
00:30:17.441 07:17:46 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:17.441 07:17:46 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:30:17.698 07:17:47 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:17.956 07:17:47 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
07:17:47 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:18.213 07:17:47 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:18.470 07:17:47 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:21.758 07:17:50 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:21.758 07:17:50 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:21.758 07:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1631935
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1631935 ']'
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1631935
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631935
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631935'
killing process with pid 1631935
07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1631935
00:30:21.759 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1631935
00:30:22.018 07:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:30:22.018 07:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:30:22.276
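The killprocess helper traced at @948-@972 guards the kill with liveness and identity checks before waiting for the process to exit. Roughly, as a sketch reconstructed from that trace (the sudo special case is elided, so this is not the exact helper):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                        # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 here, an SPDK app thread
        fi
        # (the real helper special-cases process_name = sudo; skipped in this sketch)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }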
07:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:22.276 rmmod nvme_tcp 00:30:22.276 rmmod nvme_fabrics 00:30:22.276 rmmod nvme_keyring 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1629681 ']' 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1629681 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1629681 ']' 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1629681 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.276 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629681 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629681' 00:30:22.534 killing process with pid 1629681 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1629681 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1629681 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:22.534 07:17:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.073 07:17:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:25.073 00:30:25.073 real 0m34.859s 00:30:25.073 user 2m2.987s 00:30:25.074 sys 0m5.750s 00:30:25.074 07:17:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:25.074 07:17:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
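nvmfcleanup above unloads the kernel NVMe-oF modules inside a set +e / for i in {1..20} retry loop, since module removal fails while references remain; the rmmod lines are modprobe's verbose output once it succeeds. A simplified sketch of that pattern (the pause between attempts is an assumption, not visible in the trace):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed back-off while references drain
    done
    set -e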
00:30:25.074 ************************************ 00:30:25.074 END TEST nvmf_failover 00:30:25.074 ************************************ 00:30:25.074 07:17:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:25.074 07:17:54 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:25.074 07:17:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:25.074 07:17:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.074 07:17:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.074 ************************************ 00:30:25.074 START TEST nvmf_host_discovery 00:30:25.074 ************************************ 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:25.074 * Looking for test storage... 00:30:25.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:25.074 07:17:54 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:25.074 07:17:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.981 07:17:56 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:26.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:26.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:26.981 07:17:56 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:26.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:26.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.981 07:17:56 
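The device discovery above walks sysfs to map each supported PCI function to its kernel net device; a minimal sketch using the two addresses found in this run (the operstate "up" check from the real script is elided):

for pci in 0000:0a:00.0 0000:0a:00.1; do
  # each E810 function exposes its netdevs under /sys/bus/pci/devices/<bdf>/net/
  for path in /sys/bus/pci/devices/$pci/net/*; do
    echo "Found net devices under $pci: ${path##*/}"
  done
done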
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:26.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:30:26.981 00:30:26.981 --- 10.0.0.2 ping statistics --- 00:30:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.981 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:26.981 00:30:26.981 --- 10.0.0.1 ping statistics --- 00:30:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.981 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1635212 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1635212 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1635212 ']' 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:26.981 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.981 [2024-07-13 07:17:56.237813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:26.981 [2024-07-13 07:17:56.237910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.981 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.981 [2024-07-13 07:17:56.277763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:26.981 [2024-07-13 07:17:56.309064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.981 [2024-07-13 07:17:56.400563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.981 [2024-07-13 07:17:56.400622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.982 [2024-07-13 07:17:56.400650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.982 [2024-07-13 07:17:56.400664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.982 [2024-07-13 07:17:56.400676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
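Put together, the namespace plumbing above builds a two-interface loopback topology and launches the target inside it; every command below is taken from the log (cvl_0_0/cvl_0_1 are the ice netdevs enumerated earlier):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# start the target reactor inside the namespace (binary path and flags from the log)
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &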
00:30:26.982 [2024-07-13 07:17:56.400707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.240 [2024-07-13 07:17:56.551746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.240 [2024-07-13 07:17:56.559911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.240 null0 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.240 null1 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1635231 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1635231 /tmp/host.sock 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1635231 ']' 00:30:27.240 07:17:56 
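Once the target app is up, the test provisions it over RPC; the calls below are the ones issued above, with rpc_cmd assumed to resolve to scripts/rpc.py against the target's default RPC socket (the '-t tcp -o' transport options are copied verbatim from the log, and -u 8192 sets the IO unit size):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport for the target
$rpc nvmf_create_transport -t tcp -o -u 8192
# discovery service on the well-known port 8009
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
# two 1000 MiB null bdevs with 512-byte blocks to back the namespaces
$rpc bdev_null_create null0 1000 512
$rpc bdev_null_create null1 1000 512
$rpc bdev_wait_for_examine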
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:27.240 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.240 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.240 [2024-07-13 07:17:56.638207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:27.240 [2024-07-13 07:17:56.638295] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635231 ] 00:30:27.240 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.240 [2024-07-13 07:17:56.674705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:27.499 [2024-07-13 07:17:56.704269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.499 [2024-07-13 07:17:56.791061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:27.499 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:27.757 07:17:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.757 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:27.757 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:27.757 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.757 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- 
# [[ '' == '' ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 [2024-07-13 07:17:57.193591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:27.758 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:27.758 
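With the cnode0 subsystem listening on 4420, the discovery poller on the host side attaches nvme0; the assertions that follow keep re-reading RPC state until it converges. A sketch of that pattern, combining the bdev_nvme_start_discovery call issued earlier on /tmp/host.sock with a retry loop shaped like the waitforcondition helper from autotest_common.sh (the 10-iteration bound matches the log; the helper body here is a reconstruction, not a verbatim copy):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# attach the host app to the discovery service; discovered subsystems show up as nvme0, nvme0n1, ...
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# retry a condition for up to 10 seconds, probing once per second
waitforcondition() {
  local cond=$1 max=10
  while (( max-- )); do
    eval "$cond" && return 0
    sleep 1
  done
  return 1
}
waitforcondition '[[ "$($rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r ".[].name" | sort | xargs)" == "nvme0" ]]'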
07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:28.018 07:17:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:28.587 [2024-07-13 07:17:57.929649] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:28.587 [2024-07-13 07:17:57.929682] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:28.587 [2024-07-13 07:17:57.929711] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:28.846 [2024-07-13 07:17:58.058129] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:28.846 [2024-07-13 07:17:58.160839] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:30:28.846 [2024-07-13 07:17:58.160883] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:29.105 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.106 07:17:58 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:29.106 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:29.365 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.366 [2024-07-13 07:17:58.814402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:29.366 [2024-07-13 07:17:58.814847] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:29.366 [2024-07-13 07:17:58.814893] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.366 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.627 [2024-07-13 07:17:58.941722] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:29.627 07:17:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:29.627 [2024-07-13 07:17:59.004348] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:29.627 [2024-07-13 07:17:59.004375] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:29.627 [2024-07-13 07:17:59.004386] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:30.567 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:30.568 07:17:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.568 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.827 [2024-07-13 07:18:00.050757] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:30.827 [2024-07-13 07:18:00.050804] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:30.827 [2024-07-13 07:18:00.051891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.827 [2024-07-13 07:18:00.051927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.827 [2024-07-13 07:18:00.051945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.827 [2024-07-13 07:18:00.051959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.827 [2024-07-13 07:18:00.051974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.827 [2024-07-13 07:18:00.051988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.827 [2024-07-13 07:18:00.052002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.827 [2024-07-13 07:18:00.052015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.827 [2024-07-13 07:18:00.052028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:30.827 [2024-07-13 07:18:00.061893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.827 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.827 [2024-07-13 07:18:00.071951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.827 [2024-07-13 07:18:00.072178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.828 [2024-07-13 07:18:00.072218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc6c0 with addr=10.0.0.2, port=4420 00:30:30.828 [2024-07-13 07:18:00.072235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.828 [2024-07-13 07:18:00.072258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.828 [2024-07-13 07:18:00.072282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.828 [2024-07-13 07:18:00.072296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.828 [2024-07-13 07:18:00.072316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:30.828 [2024-07-13 07:18:00.072337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.828 [2024-07-13 07:18:00.082033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.828 [2024-07-13 07:18:00.082212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.828 [2024-07-13 07:18:00.082240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc6c0 with addr=10.0.0.2, port=4420 00:30:30.828 [2024-07-13 07:18:00.082256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.828 [2024-07-13 07:18:00.082278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.828 [2024-07-13 07:18:00.082298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.828 [2024-07-13 07:18:00.082312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.828 [2024-07-13 07:18:00.082325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:30.828 [2024-07-13 07:18:00.082343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.828 [2024-07-13 07:18:00.092105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.828 [2024-07-13 07:18:00.092313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.828 [2024-07-13 07:18:00.092340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc6c0 with addr=10.0.0.2, port=4420 00:30:30.828 [2024-07-13 07:18:00.092357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.828 [2024-07-13 07:18:00.092380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.828 [2024-07-13 07:18:00.092400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.828 [2024-07-13 07:18:00.092413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.828 [2024-07-13 07:18:00.092427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:30.828 [2024-07-13 07:18:00.092445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:30.828 [2024-07-13 07:18:00.102195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.828 [2024-07-13 07:18:00.102447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.828 [2024-07-13 07:18:00.102477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc6c0 with addr=10.0.0.2, port=4420 00:30:30.828 [2024-07-13 07:18:00.102493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.828 [2024-07-13 07:18:00.102515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.828 [2024-07-13 07:18:00.102576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.828 [2024-07-13 07:18:00.102596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.828 [2024-07-13 07:18:00.102609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:30.828 [2024-07-13 07:18:00.102628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.828 [2024-07-13 07:18:00.112273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.828 [2024-07-13 07:18:00.112485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.828 [2024-07-13 07:18:00.112513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc6c0 with addr=10.0.0.2, port=4420 00:30:30.828 [2024-07-13 07:18:00.112529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.828 [2024-07-13 07:18:00.112552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.828 [2024-07-13 07:18:00.112572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.828 [2024-07-13 07:18:00.112586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.828 [2024-07-13 07:18:00.112599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:30.828 [2024-07-13 07:18:00.112630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.828 [2024-07-13 07:18:00.122369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.828 [2024-07-13 07:18:00.122564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.828 [2024-07-13 07:18:00.122594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc6c0 with addr=10.0.0.2, port=4420 00:30:30.828 [2024-07-13 07:18:00.122613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.828 [2024-07-13 07:18:00.122636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.828 [2024-07-13 07:18:00.122686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.828 [2024-07-13 07:18:00.122707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.828 [2024-07-13 07:18:00.122723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
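Against that backdrop, host/discovery.sh@131 polls get_subsystem_paths nvme0 until only the second listener's port remains. The helper is the plain rpc_cmd-to-jq pipeline visible at the @63 trace lines; this sketch reproduces exactly the commands shown, with only the function wrapper itself assumed:

# get_subsystem_paths: print the trsvcid of every connected path for a
# controller, numerically sorted and space-separated (host/discovery.sh@63)
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' \
        | sort -n \
        | xargs
}
# used above as: [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]
# i.e. the check passes once "4420 4421" has collapsed to just "4421"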
00:30:30.828 [2024-07-13 07:18:00.122743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.828 [2024-07-13 07:18:00.132444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.828 [2024-07-13 07:18:00.132623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.828 [2024-07-13 07:18:00.132653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc6c0 with addr=10.0.0.2, port=4420 00:30:30.828 [2024-07-13 07:18:00.132676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc6c0 is same with the state(5) to be set 00:30:30.828 [2024-07-13 07:18:00.132701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc6c0 (9): Bad file descriptor 00:30:30.828 [2024-07-13 07:18:00.132750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.828 [2024-07-13 07:18:00.132767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.828 [2024-07-13 07:18:00.132780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:30.828 [2024-07-13 07:18:00.132799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.828 [2024-07-13 07:18:00.138753] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:30.828 [2024-07-13 07:18:00.138785] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:30.828 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.829 
07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:30.829 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:31.088 07:18:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.088 07:18:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.023 [2024-07-13 07:18:01.368448] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:32.023 [2024-07-13 07:18:01.368480] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:32.023 [2024-07-13 07:18:01.368505] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:32.023 [2024-07-13 07:18:01.455772] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:32.282 [2024-07-13 07:18:01.726608] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:32.282 [2024-07-13 07:18:01.726656] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:32.282 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.282 
07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.542 request: 00:30:32.542 { 00:30:32.542 "name": "nvme", 00:30:32.542 "trtype": "tcp", 00:30:32.542 "traddr": "10.0.0.2", 00:30:32.542 "adrfam": "ipv4", 00:30:32.542 "trsvcid": "8009", 00:30:32.542 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:32.542 "wait_for_attach": true, 00:30:32.542 "method": "bdev_nvme_start_discovery", 00:30:32.542 "req_id": 1 00:30:32.542 } 00:30:32.542 Got JSON-RPC error response 00:30:32.542 response: 00:30:32.542 { 00:30:32.542 "code": -17, 00:30:32.542 "message": "File exists" 00:30:32.542 } 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:32.542 07:18:01 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.542 request: 00:30:32.542 { 00:30:32.542 "name": "nvme_second", 00:30:32.542 "trtype": "tcp", 00:30:32.542 "traddr": "10.0.0.2", 00:30:32.542 "adrfam": "ipv4", 00:30:32.542 "trsvcid": "8009", 00:30:32.542 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:32.542 "wait_for_attach": true, 00:30:32.542 "method": "bdev_nvme_start_discovery", 00:30:32.542 "req_id": 1 00:30:32.542 } 00:30:32.542 Got JSON-RPC error response 00:30:32.542 response: 00:30:32.542 { 00:30:32.542 "code": -17, 00:30:32.542 "message": "File exists" 00:30:32.542 } 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.542 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.543 07:18:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.479 [2024-07-13 07:18:02.934220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-07-13 07:18:02.934301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af6d50 with addr=10.0.0.2, port=8010 00:30:33.479 [2024-07-13 07:18:02.934335] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:33.479 [2024-07-13 07:18:02.934351] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:33.479 [2024-07-13 07:18:02.934366] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:34.854 [2024-07-13 07:18:03.936536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.854 [2024-07-13 07:18:03.936602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af6d50 with addr=10.0.0.2, port=8010 00:30:34.854 [2024-07-13 07:18:03.936633] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:34.854 [2024-07-13 07:18:03.936649] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:34.854 [2024-07-13 07:18:03.936663] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:35.787 [2024-07-13 07:18:04.938743] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:35.787 request: 00:30:35.787 { 00:30:35.787 "name": "nvme_second", 00:30:35.787 "trtype": "tcp", 00:30:35.787 "traddr": "10.0.0.2", 00:30:35.787 "adrfam": "ipv4", 00:30:35.787 "trsvcid": "8010", 00:30:35.787 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:35.787 "wait_for_attach": false, 00:30:35.787 "attach_timeout_ms": 3000, 00:30:35.787 "method": "bdev_nvme_start_discovery", 00:30:35.787 "req_id": 1 00:30:35.787 } 00:30:35.787 Got JSON-RPC error response 
00:30:35.787 response: 00:30:35.787 { 00:30:35.787 "code": -110, 00:30:35.787 "message": "Connection timed out" 00:30:35.787 } 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1635231 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:35.787 07:18:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:35.787 rmmod nvme_tcp 00:30:35.787 rmmod nvme_fabrics 00:30:35.787 rmmod nvme_keyring 00:30:35.787 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:35.787 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:35.787 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1635212 ']' 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1635212 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1635212 ']' 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1635212 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1635212 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1635212' 00:30:35.788 killing process with pid 1635212 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1635212 00:30:35.788 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1635212 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:36.084 07:18:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:37.986 00:30:37.986 real 0m13.261s 00:30:37.986 user 0m19.322s 00:30:37.986 sys 0m2.782s 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.986 ************************************ 00:30:37.986 END TEST nvmf_host_discovery 00:30:37.986 ************************************ 00:30:37.986 07:18:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:37.986 07:18:07 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:37.986 07:18:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:37.986 07:18:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.986 07:18:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:37.986 ************************************ 00:30:37.986 START TEST nvmf_host_multipath_status 00:30:37.986 ************************************ 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:37.986 * Looking for test storage... 
00:30:37.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.986 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:38.244 07:18:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:38.244 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:38.245 07:18:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:40.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:40.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:40.148 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
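For orientation, the device-discovery trace above reduces to the short bash sketch below. It assumes pci_bus_cache has been populated earlier in nvmf/common.sh (keyed as "vendor:device", values are PCI addresses, not shown in this log); on this host only the two Intel E810 0x159b functions match, so pci_devs ends up holding both ports:

  intel=0x8086
  # collect the E810 functions; the other device-ID buckets are empty on this host
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  pci_devs=("${e810[@]}")
  for pci in "${pci_devs[@]}"; do
      echo "Found $pci (0x8086 - 0x159b)"   # matches the two "Found ..." lines logged above
  done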
00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:40.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:40.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:40.149 07:18:09 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:40.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:30:40.149 00:30:40.149 --- 10.0.0.2 ping statistics --- 00:30:40.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.149 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:30:40.149 00:30:40.149 --- 10.0.0.1 ping statistics --- 00:30:40.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.149 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1638994 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1638994 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1638994 ']' 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:40.149 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:40.149 [2024-07-13 07:18:09.503778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
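Condensed from the nvmf_tcp_init trace above, the whole two-port test topology comes down to this ip/iptables sequence (namespace, interface names, and addresses exactly as logged; 10.0.0.1 is the initiator side, 10.0.0.2 the target side):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace

Both pings answering is what lets the init path proceed to return 0 just below.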
00:30:40.149 [2024-07-13 07:18:09.503850] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.149 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.149 [2024-07-13 07:18:09.541686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:40.149 [2024-07-13 07:18:09.568812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:40.407 [2024-07-13 07:18:09.653312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.407 [2024-07-13 07:18:09.653368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.407 [2024-07-13 07:18:09.653397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.407 [2024-07-13 07:18:09.653408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.407 [2024-07-13 07:18:09.653418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.407 [2024-07-13 07:18:09.653548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.407 [2024-07-13 07:18:09.653552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.407 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:40.408 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:40.408 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:40.408 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:40.408 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:40.408 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.408 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1638994 00:30:40.408 07:18:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:40.665 [2024-07-13 07:18:09.997422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.665 07:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:40.923 Malloc0 00:30:40.923 07:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:41.180 07:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:41.438 07:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:30:41.696 [2024-07-13 07:18:11.015490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.696 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:41.954 [2024-07-13 07:18:11.256164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1639155 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1639155 /var/tmp/bdevperf.sock 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1639155 ']' 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:41.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
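With the xtrace noise stripped, the target setup above plus the two bdev_nvme_attach_controller calls that follow reduce to the RPC sequence below (rpc.py paths shortened, flags exactly as logged). Only the second attach passes -x multipath, which is how the 4421 connection becomes a second path under the same Nvme0 name instead of a separate bdev:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Each check_status round in the rest of the log then asks bdevperf for its I/O paths and picks one attribute per listener port, for example the "current" flag of the 4420 path:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
      jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

whose true/false answer is compared against the state expected after each nvmf_subsystem_listener_set_ana_state change.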
00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:41.954 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:42.212 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:42.212 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:42.212 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:42.470 07:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:43.035 Nvme0n1 00:30:43.035 07:18:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:43.599 Nvme0n1 00:30:43.599 07:18:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:43.599 07:18:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:45.497 07:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:45.497 07:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:45.755 07:18:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:46.012 07:18:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.383 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:47.640 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:47.640 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:47.640 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.640 07:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:47.898 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.898 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:47.898 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.898 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:48.155 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.155 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:48.155 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.155 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:48.412 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.412 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:48.413 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.413 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:48.669 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.669 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:48.670 07:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:48.927 07:18:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:49.185 07:18:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:50.119 07:18:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:50.119 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:50.119 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.119 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:50.377 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.377 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:50.377 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.377 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:50.635 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.635 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:50.635 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.635 07:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:50.924 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.924 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:50.924 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.924 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:51.188 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.188 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:51.188 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.188 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:51.447 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.447 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:51.447 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.447 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:51.705 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.705 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:51.705 07:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:51.963 07:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:52.221 07:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:53.156 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:53.156 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:53.156 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.156 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:53.415 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.415 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:53.415 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.415 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:53.673 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:53.673 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:53.673 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.673 07:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:53.931 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.931 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:53.931 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.931 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:54.187 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.187 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:54.187 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.187 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:54.444 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.444 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:54.444 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.444 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:54.701 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.701 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:54.701 07:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:54.958 07:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:55.216 07:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:56.149 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:56.149 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:56.149 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.149 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:56.407 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.407 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:56.407 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.407 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:56.665 07:18:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:56.665 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:56.665 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.665 07:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:56.923 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.923 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:56.923 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.923 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:57.181 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.181 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:57.181 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.181 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:57.439 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.439 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:57.439 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.439 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:57.697 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:57.697 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:57.697 07:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:57.955 07:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:58.213 07:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:59.145 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:59.145 07:18:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:59.145 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.145 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:59.402 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.402 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:59.402 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.402 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:59.659 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.659 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:59.659 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.659 07:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:59.917 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.917 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:59.917 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.917 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:00.174 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.174 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:00.174 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.174 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:00.432 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:00.432 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:00.432 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.432 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:00.690 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:00.690 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:00.690 07:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:00.947 07:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:01.205 07:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:02.137 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:02.137 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:02.137 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.137 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:02.395 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:02.395 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:02.395 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.395 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:02.653 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.653 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:02.653 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.653 07:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:02.909 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.909 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:02.909 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.909 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:03.167 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.167 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:03.167 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.167 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:03.425 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:03.425 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:03.425 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.425 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:03.682 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.682 07:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:03.938 07:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:03.938 07:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:04.286 07:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:04.566 07:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.497 07:18:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:31:05.755 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.755 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:05.755 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.755 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:06.013 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.013 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:06.013 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.013 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:06.271 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.271 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:06.271 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.271 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:06.529 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.529 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:06.529 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.529 07:18:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:06.786 07:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.786 07:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:06.786 07:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:07.043 07:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:07.300 07:18:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.671 07:18:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:08.928 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.928 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:08.928 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.928 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:09.185 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.185 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:09.185 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.185 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:09.442 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.442 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:09.442 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.442 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:09.699 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.699 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:09.699 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.699 07:18:38 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:09.957 07:18:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.957 07:18:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:09.957 07:18:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:10.214 07:18:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:10.472 07:18:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:11.406 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:11.406 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:11.406 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.406 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:11.664 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.664 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:11.664 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.664 07:18:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:11.922 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.922 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:11.922 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.922 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:12.180 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.180 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:12.180 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.180 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:12.439 07:18:41 
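The pattern repeated throughout this trace is a single helper: each port_status call issues one RPC to the bdevperf app and filters the returned JSON with jq. A condensed Bash reconstruction, inferred from the xtrace output above rather than copied verbatim from host/multipath_status.sh:

port_status() { # usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
    local port=$1 field=$2 expected=$3
    local actual
    # Ask the bdevperf app, over its RPC socket, for every I/O path it knows,
    # then pull the requested field for the listener on this TCP service port.
    actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}

check_status then chains six of these probes, as the @68 through @73 trace lines show: current, connected, and accessible for each of ports 4420 and 4421.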
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.439 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:12.439 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.439 07:18:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:12.697 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.697 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:12.697 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.697 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:12.955 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.955 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:12.955 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:13.214 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:13.472 07:18:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:14.403 07:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:14.403 07:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:14.403 07:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.403 07:18:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.661 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.661 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:14.661 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.661 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:14.919 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.919 07:18:44 
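The failover being verified here is driven entirely from the target side. The two RPCs appear verbatim in the trace, and the sleep gives the initiator time to consume the resulting ANA change notification before the next check_status round:

scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1  # let the host re-evaluate its paths before asserting on them

Note what the assertions expect for the port flipped to inaccessible: connected stays true (the TCP connection is kept alive), while current and accessible both drop to false.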
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:14.919 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.919 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:15.177 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.177 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:15.177 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.177 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.435 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.435 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.435 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.435 07:18:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.693 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.693 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:15.693 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.693 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1639155 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1639155 ']' 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1639155 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1639155 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1639155' 00:31:15.951 killing process with pid 1639155 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1639155 00:31:15.951 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1639155 00:31:16.212 Connection closed with partial response: 00:31:16.212 00:31:16.212 00:31:16.212 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1639155 00:31:16.212 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:16.212 [2024-07-13 07:18:11.315475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:31:16.212 [2024-07-13 07:18:11.315565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639155 ] 00:31:16.212 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.212 [2024-07-13 07:18:11.349327] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:16.212 [2024-07-13 07:18:11.377861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.212 [2024-07-13 07:18:11.465302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.212 Running I/O for 90 seconds... 00:31:16.212 [2024-07-13 07:18:27.173199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 
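A decoding aid for the completion lines that fill the rest of this try.txt dump; the field meanings come from the NVMe status format that the print routine is rendering:

# "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" in each CQE breaks down as:
#   03 -> Status Code Type 0x3: Path Related Status
#   02 -> Status Code 0x02: Asymmetric Access Inaccessible
# dnr:0 means the Do Not Retry bit is clear, so the initiator may resubmit
# the failed WRITE/READ on the other, still-accessible path -- which is the
# multipath behavior this test is exercising.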
[2024-07-13 07:18:27.173570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.173969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.173992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.174023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:16.212 [2024-07-13 07:18:27.174046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70320 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.212 [2024-07-13 07:18:27.174128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174716] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.174975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.174991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:31:16.213 [2024-07-13 07:18:27.175665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.213 [2024-07-13 07:18:27.175721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:16.213 [2024-07-13 07:18:27.175749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.175765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.175788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.175804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.175827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.175843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.175892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.175916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.175941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.175957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.175981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.175998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.176961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.176989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.177008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.177053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.177097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.214 [2024-07-13 07:18:27.177152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.214 [2024-07-13 07:18:27.177212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.214 [2024-07-13 07:18:27.177256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
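One detail as the dump switches from WRITEs to READs above: the SGL descriptor changes with the data direction. A hedged reading, since the log itself does not explain it:

# WRITE ... SGL DATA BLOCK OFFSET 0x0 len:0x1000
#   -> write payload carried in-capsule, at offset 0 of the command capsule
# READ  ... SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
#   -> read payload moved by the transport itself (C2HData PDUs on NVMe/TCP)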
00:31:16.214 [2024-07-13 07:18:27.177299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.214 [2024-07-13 07:18:27.177342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.214 [2024-07-13 07:18:27.177385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.214 [2024-07-13 07:18:27.177429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:16.214 [2024-07-13 07:18:27.177456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.215 [2024-07-13 07:18:27.177872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.177972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.177988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 
m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:16.215 [2024-07-13 07:18:27.178857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.215 [2024-07-13 07:18:27.178897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.178935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.178952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.178979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.178996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:27.179688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:27.179704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
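The timestamps in the dump jump here from the 07:18:27 burst (LBAs around 70000) to 07:18:42 entries with much lower LBAs; plausibly a later ANA transition inside the same 90-second bdevperf run, though the log does not label its phases. To isolate one burst when reading a dump like this, something along these lines works:

grep '07:18:42' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | head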
00:31:16.216 [2024-07-13 07:18:42.750372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:42.750533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:42.750573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.216 [2024-07-13 07:18:42.750615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:16.216 [2024-07-13 07:18:42.750680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.216 [2024-07-13 07:18:42.750697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.750719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.750735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.750758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.750774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.750797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.750818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.750841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.750857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.750891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.750909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.750932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.750948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.750970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.750986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.751008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.751024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.751046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.751063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.751085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.751102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.751124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.751140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.751162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.751178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.751201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.751217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.751240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.751256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.754276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.217 [2024-07-13 07:18:42.754332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.754372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.754411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.754452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.754491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.754531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:31:16.217 [2024-07-13 07:18:42.754553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.754571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:16.217 [2024-07-13 07:18:42.754593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.217 [2024-07-13 07:18:42.754611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.754634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.754651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.754674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.754690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.754714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.754731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.754754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.754771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.754915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.754936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.754960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.754976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.754999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.755014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:16.218 [2024-07-13 07:18:42.755037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.218 [2024-07-13 07:18:42.755152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
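Every completion in the condensed block above carries the status pair (03/02). In spdk_nvme_print_completion's output that pair is (SCT/SC): status code type 0x3 is the NVMe path-related group, and status code 0x2 within it is Asymmetric Access Inaccessible, the ANA state this multipath test deliberately drives the active path into. Since dnr:0 (do-not-retry) is clear on every completion, the host is permitted to retry these I/Os on another path. A minimal bash helper for decoding the pair while reading logs like this one (hypothetical, not part of the test suite):

# decode_nvme_status SCT SC -- maps the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion to a name, for the path-related group.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct" in
        00) echo "generic command status, sc=$sc" ;;
        03) # Path-related status (NVMe base spec); these drive multipath failover
            case "$sc" in
                00) echo "INTERNAL PATH ERROR" ;;
                01) echo "ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
                02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;
                03) echo "ASYMMETRIC ACCESS TRANSITION" ;;
                *)  echo "path-related status, sc=$sc" ;;
            esac ;;
        *)  echo "sct=$sct sc=$sc" ;;
    esac
}

decode_nvme_status 03 02   # -> ASYMMETRIC ACCESS INACCESSIBLE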
00:31:16.218 Received shutdown signal, test time was about 32.324105 seconds
00:31:16.218
00:31:16.218                                                        Latency(us)
00:31:16.218 Device Information     : runtime(s)    IOPS     MiB/s  Fail/s  TO/s    Average     min        max
00:31:16.218 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:16.218 Verification LBA range: start 0x0 length 0x4000
00:31:16.218 Nvme0n1                :      32.32  7216.65    28.19    0.00  0.00  17708.01  247.28  4026531.84
00:31:16.218 ===================================================================================================
00:31:16.218 Total                  :             7216.65    28.19    0.00  0.00  17708.01  247.28  4026531.84
00:31:16.218 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:16.477 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:16.477 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1638994 ']'
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1638994
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1638994 ']'
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1638994
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1638994
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:31:16.477 07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1638994'
killing process with pid 1638994
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1638994
07:18:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1638994
00:31:16.736 07:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
07:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
07:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
07:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
07:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
07:18:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:18:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
07:18:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:19.311 07:18:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:19.311
00:31:19.311 real 0m40.807s
00:31:19.311 user 1m57.732s
00:31:19.311 sys 0m12.501s
00:31:19.311 07:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:31:19.311 07:18:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:19.311 ************************************
00:31:19.311 END TEST nvmf_host_multipath_status
00:31:19.311 ************************************
00:31:19.311 07:18:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:31:19.311 07:18:48 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:31:19.311 07:18:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:31:19.311 07:18:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:19.311 07:18:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:19.311 ************************************
00:31:19.311 START TEST nvmf_discovery_remove_ifc
00:31:19.311 ************************************
00:31:19.311 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
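The real/user/sys block and the END/START banner pairs above come from run_test in autotest_common.sh, which wraps each test script with a banner, times it, and propagates its exit status. Roughly, as a simplified bash sketch (the actual helper also manages xtrace state and failure bookkeeping):

# Simplified sketch of the run_test wrapper; not the exact implementation.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # bash keyword: prints the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch nvmf_discovery_remove_ifc \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp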
00:31:19.311 * Looking for test storage...
00:31:19.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:19.311 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... same toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:19.312 07:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:21.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:21.213 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.213 07:18:50 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:21.213 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:21.213 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:21.213 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:21.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:31:21.214 00:31:21.214 --- 10.0.0.2 ping statistics --- 00:31:21.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.214 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:31:21.214 00:31:21.214 --- 10.0.0.1 ping statistics --- 00:31:21.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.214 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1645357 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1645357 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1645357 ']' 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:21.214 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.214 [2024-07-13 07:18:50.508707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:31:21.214 [2024-07-13 07:18:50.508792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.214 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.214 [2024-07-13 07:18:50.546660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:31:21.214 [2024-07-13 07:18:50.578927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.471 [2024-07-13 07:18:50.668714] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.471 [2024-07-13 07:18:50.668771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.471 [2024-07-13 07:18:50.668798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.471 [2024-07-13 07:18:50.668812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.471 [2024-07-13 07:18:50.668823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.471 [2024-07-13 07:18:50.668876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.471 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:21.471 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.472 [2024-07-13 07:18:50.825886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.472 [2024-07-13 07:18:50.834095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:21.472 null0 00:31:21.472 [2024-07-13 07:18:50.866022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1645402 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1645402 /tmp/host.sock 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1645402 ']' 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:31:21.472 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:21.472 07:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.729 [2024-07-13 07:18:50.933609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:31:21.729 [2024-07-13 07:18:50.933676] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645402 ] 00:31:21.729 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.729 [2024-07-13 07:18:50.965890] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:21.729 [2024-07-13 07:18:50.996342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.729 [2024-07-13 07:18:51.091987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.729 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.987 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.987 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:21.987 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.987 07:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:22.919 [2024-07-13 07:18:52.314671] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:22.919 [2024-07-13 07:18:52.314711] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:22.919 [2024-07-13 07:18:52.314732] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:23.177 [2024-07-13 07:18:52.442150] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:23.435 [2024-07-13 07:18:52.667382] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:23.435 [2024-07-13 07:18:52.667456] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:23.435 [2024-07-13 07:18:52.667500] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:23.435 [2024-07-13 07:18:52.667526] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:23.435 [2024-07-13 07:18:52.667563] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.435 [2024-07-13 07:18:52.672605] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11c3370 was disconnected and freed. delete nvme_qpair. 
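The wait_for_bdev and get_bdev_list records in this trace poll the host's bdev list once per second over the /tmp/host.sock RPC socket until it matches an expected value: first nvme0n1 (discovery attached the controller), then, after the target-side interface is torn down below, the empty string. Paraphrased into a standalone bash sketch (the rpc.py path and socket are the ones this run uses; the real script goes through its rpc_cmd wrapper):

# Sketch of the polling helpers exercised above and below, paraphrased from
# host/discovery_remove_ifc.sh under the assumptions noted in the lead-in.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_bdev_list() {
    # List bdev names as a single sorted, space-separated string
    "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected string
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1    # discovery attached: nvme0n1 must show up
wait_for_bdev ''         # after the interface is removed: list must go empty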
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:31:23.435 07:18:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
[... the same get_bdev_list poll (rpc_cmd / jq / sort / xargs, with rpc_cmd's xtrace toggling) repeats once per second at 07:18:53, 07:18:54, 07:18:55 and 07:18:56; each pass still lists nvme0n1, so the loop sleeps and retries ...]
00:31:28.607
07:18:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.607 07:18:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:28.607 07:18:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:28.864 [2024-07-13 07:18:58.109209] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:28.864 [2024-07-13 07:18:58.109272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.864 [2024-07-13 07:18:58.109292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.864 [2024-07-13 07:18:58.109308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.864 [2024-07-13 07:18:58.109321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.864 [2024-07-13 07:18:58.109334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.864 [2024-07-13 07:18:58.109347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.864 [2024-07-13 07:18:58.109360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.864 [2024-07-13 07:18:58.109372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.864 [2024-07-13 07:18:58.109386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.864 [2024-07-13 07:18:58.109399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.864 [2024-07-13 07:18:58.109418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189d50 is same with the state(5) to be set 00:31:28.864 [2024-07-13 07:18:58.119234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189d50 (9): Bad file descriptor 00:31:28.864 [2024-07-13 07:18:58.129277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:29.798 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.798 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.798 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.798 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.798 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.798 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.799 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:29.799 [2024-07-13 07:18:59.165915] 
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:29.799 [2024-07-13 07:18:59.165985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1189d50 with addr=10.0.0.2, port=4420 00:31:29.799 [2024-07-13 07:18:59.166013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189d50 is same with the state(5) to be set 00:31:29.799 [2024-07-13 07:18:59.166072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189d50 (9): Bad file descriptor 00:31:29.799 [2024-07-13 07:18:59.166565] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:29.799 [2024-07-13 07:18:59.166601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:29.799 [2024-07-13 07:18:59.166619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:29.799 [2024-07-13 07:18:59.166637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:29.799 [2024-07-13 07:18:59.166672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:29.799 [2024-07-13 07:18:59.166692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:29.799 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.799 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.799 07:18:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:30.732 [2024-07-13 07:19:00.169197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:30.732 [2024-07-13 07:19:00.169268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:30.732 [2024-07-13 07:19:00.169283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:30.732 [2024-07-13 07:19:00.169297] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:30.732 [2024-07-13 07:19:00.169327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
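The loop that dominates the first half of this trace is easiest to read as code. Below is a minimal sketch of the polling helpers, reconstructed from the xtrace lines above: function and command names are taken from the trace, but the loop structure is inferred rather than copied from host/discovery_remove_ifc.sh, and rpc_cmd is assumed to be the shared SPDK RPC wrapper defined in the common test helpers.

    # List bdev names over the host RPC socket, normalized to one sorted line.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Retry once per second until the bdev list matches the expected value;
    # the trace above calls this with '' to wait for nvme0n1 to disappear
    # after the target-side interface is torn down.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
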
00:31:30.732 [2024-07-13 07:19:00.169365] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:30.732 [2024-07-13 07:19:00.169434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.732 [2024-07-13 07:19:00.169457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.732 [2024-07-13 07:19:00.169476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.732 [2024-07-13 07:19:00.169500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.732 [2024-07-13 07:19:00.169515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.732 [2024-07-13 07:19:00.169528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.732 [2024-07-13 07:19:00.169543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.732 [2024-07-13 07:19:00.169557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.732 [2024-07-13 07:19:00.169572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.732 [2024-07-13 07:19:00.169585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.732 [2024-07-13 07:19:00.169599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:30.732 [2024-07-13 07:19:00.169775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189210 (9): Bad file descriptor 00:31:30.732 [2024-07-13 07:19:00.170788] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:30.732 [2024-07-13 07:19:00.170810] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:30.732 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:30.991 07:19:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:31.925 07:19:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:32.865 [2024-07-13 07:19:02.231070] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:32.865 [2024-07-13 07:19:02.231111] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:32.865 [2024-07-13 07:19:02.231133] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:32.865 [2024-07-13 07:19:02.317423] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:33.125 07:19:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:33.125 [2024-07-13 07:19:02.501540] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:33.125 [2024-07-13 07:19:02.501590] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:33.125 [2024-07-13 07:19:02.501621] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:33.125 [2024-07-13 07:19:02.501642] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:33.125 [2024-07-13 07:19:02.501654] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:33.125 [2024-07-13 07:19:02.549582] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1197550 was disconnected and freed. delete nvme_qpair. 
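The recovery leg of the test is visible nearly verbatim in the trace above: re-add the address and bring the link back up inside the target's network namespace, then poll with the same helper until discovery re-attaches the subsystem under a new bdev name.

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # blocks until the re-attached namespace shows up
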
00:31:34.087 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.087 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.087 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1645402 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1645402 ']' 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1645402 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1645402 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1645402' 00:31:34.088 killing process with pid 1645402 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1645402 00:31:34.088 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1645402 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:34.345 rmmod nvme_tcp 00:31:34.345 rmmod nvme_fabrics 00:31:34.345 rmmod nvme_keyring 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
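The killprocess helper traced above (and again just below for the target pid) follows a recognizable shape. This is a hedged reconstruction from the traced conditionals only; the sudo branch is elided because the trace never takes it.

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1      # the '[' -z "$pid" ']' guard in the trace
        kill -0 "$pid"                 # confirm the process is still alive
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            :  # would target the wrapped child instead; branch not taken in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
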
00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1645357 ']' 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1645357 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1645357 ']' 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1645357 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1645357 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1645357' 00:31:34.345 killing process with pid 1645357 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1645357 00:31:34.345 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1645357 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:34.603 07:19:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.135 07:19:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:37.135 00:31:37.135 real 0m17.785s 00:31:37.135 user 0m25.933s 00:31:37.135 sys 0m2.979s 00:31:37.135 07:19:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:37.135 07:19:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.135 ************************************ 00:31:37.135 END TEST nvmf_discovery_remove_ifc 00:31:37.135 ************************************ 00:31:37.135 07:19:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:37.135 07:19:06 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:37.135 07:19:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:37.135 07:19:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.135 07:19:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.135 ************************************ 00:31:37.135 START TEST nvmf_identify_kernel_target 00:31:37.135 ************************************ 
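The identify_kernel_target test starting here stands up a kernel-mode NVMe-oF target through configfs; the traced sequence appears further down in this log (nvmf/common.sh@658 through @677) and boils down to the sketch below. The redirection targets are not captured by xtrace, so the attribute file names here are assumptions based on the standard kernel nvmet configfs layout, not anything shown in this log.

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"            # reappears below as Model Number
    echo 1            > "$subsys/attr_allow_any_host"   # assumed attribute name
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
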
00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:37.135 * Looking for test storage... 00:31:37.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:37.135 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:37.136 07:19:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:37.136 07:19:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:39.038 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:39.039 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:39.039 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:39.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:39.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:39.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:31:39.039 00:31:39.039 --- 10.0.0.2 ping statistics --- 00:31:39.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.039 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:31:39.039 00:31:39.039 --- 10.0.0.1 ping statistics --- 00:31:39.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.039 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:39.039 07:19:08 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:39.039 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:39.040 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:39.040 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:39.040 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:39.040 07:19:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:39.971 Waiting for block devices as requested 00:31:39.971 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:40.229 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:40.229 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:40.229 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:40.488 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:40.488 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:40.488 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:40.488 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:40.747 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:40.747 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:40.747 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:40.747 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:41.005 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:41.005 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:41.005 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:41.005 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:41.005 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:41.264 No valid GPT data, bailing 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:41.264 00:31:41.264 Discovery Log Number of Records 2, Generation counter 2 00:31:41.264 =====Discovery Log Entry 0====== 00:31:41.264 trtype: tcp 00:31:41.264 adrfam: ipv4 00:31:41.264 subtype: current discovery subsystem 00:31:41.264 treq: not specified, sq flow control disable supported 00:31:41.264 portid: 1 00:31:41.264 trsvcid: 4420 00:31:41.264 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:41.264 traddr: 10.0.0.1 00:31:41.264 eflags: none 00:31:41.264 sectype: none 00:31:41.264 =====Discovery Log Entry 1====== 00:31:41.264 trtype: tcp 00:31:41.264 adrfam: ipv4 00:31:41.264 subtype: nvme subsystem 00:31:41.264 treq: not specified, sq flow control disable supported 00:31:41.264 portid: 1 00:31:41.264 trsvcid: 4420 00:31:41.264 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:41.264 traddr: 10.0.0.1 00:31:41.264 eflags: none 00:31:41.264 sectype: none 00:31:41.264 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:41.264 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:41.524 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.524 ===================================================== 00:31:41.524 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:41.524 ===================================================== 00:31:41.524 Controller Capabilities/Features 00:31:41.524 ================================ 00:31:41.524 Vendor ID: 0000 00:31:41.524 Subsystem Vendor ID: 0000 00:31:41.524 Serial Number: d0b8f1e9e69073f5c188 00:31:41.524 Model Number: Linux 00:31:41.524 Firmware Version: 6.7.0-68 00:31:41.524 Recommended Arb Burst: 0 00:31:41.524 IEEE OUI Identifier: 00 00 00 00:31:41.524 Multi-path I/O 00:31:41.524 May have multiple subsystem ports: No 00:31:41.524 May have multiple 
controllers: No 00:31:41.524 Associated with SR-IOV VF: No 00:31:41.524 Max Data Transfer Size: Unlimited 00:31:41.524 Max Number of Namespaces: 0 00:31:41.524 Max Number of I/O Queues: 1024 00:31:41.524 NVMe Specification Version (VS): 1.3 00:31:41.524 NVMe Specification Version (Identify): 1.3 00:31:41.524 Maximum Queue Entries: 1024 00:31:41.524 Contiguous Queues Required: No 00:31:41.524 Arbitration Mechanisms Supported 00:31:41.524 Weighted Round Robin: Not Supported 00:31:41.524 Vendor Specific: Not Supported 00:31:41.524 Reset Timeout: 7500 ms 00:31:41.524 Doorbell Stride: 4 bytes 00:31:41.524 NVM Subsystem Reset: Not Supported 00:31:41.524 Command Sets Supported 00:31:41.524 NVM Command Set: Supported 00:31:41.524 Boot Partition: Not Supported 00:31:41.524 Memory Page Size Minimum: 4096 bytes 00:31:41.524 Memory Page Size Maximum: 4096 bytes 00:31:41.524 Persistent Memory Region: Not Supported 00:31:41.524 Optional Asynchronous Events Supported 00:31:41.524 Namespace Attribute Notices: Not Supported 00:31:41.524 Firmware Activation Notices: Not Supported 00:31:41.524 ANA Change Notices: Not Supported 00:31:41.524 PLE Aggregate Log Change Notices: Not Supported 00:31:41.524 LBA Status Info Alert Notices: Not Supported 00:31:41.524 EGE Aggregate Log Change Notices: Not Supported 00:31:41.524 Normal NVM Subsystem Shutdown event: Not Supported 00:31:41.524 Zone Descriptor Change Notices: Not Supported 00:31:41.524 Discovery Log Change Notices: Supported 00:31:41.524 Controller Attributes 00:31:41.524 128-bit Host Identifier: Not Supported 00:31:41.524 Non-Operational Permissive Mode: Not Supported 00:31:41.524 NVM Sets: Not Supported 00:31:41.524 Read Recovery Levels: Not Supported 00:31:41.524 Endurance Groups: Not Supported 00:31:41.524 Predictable Latency Mode: Not Supported 00:31:41.524 Traffic Based Keep ALive: Not Supported 00:31:41.524 Namespace Granularity: Not Supported 00:31:41.524 SQ Associations: Not Supported 00:31:41.524 UUID List: Not Supported 00:31:41.524 Multi-Domain Subsystem: Not Supported 00:31:41.524 Fixed Capacity Management: Not Supported 00:31:41.524 Variable Capacity Management: Not Supported 00:31:41.524 Delete Endurance Group: Not Supported 00:31:41.524 Delete NVM Set: Not Supported 00:31:41.524 Extended LBA Formats Supported: Not Supported 00:31:41.524 Flexible Data Placement Supported: Not Supported 00:31:41.524 00:31:41.524 Controller Memory Buffer Support 00:31:41.524 ================================ 00:31:41.524 Supported: No 00:31:41.524 00:31:41.524 Persistent Memory Region Support 00:31:41.524 ================================ 00:31:41.524 Supported: No 00:31:41.524 00:31:41.524 Admin Command Set Attributes 00:31:41.524 ============================ 00:31:41.524 Security Send/Receive: Not Supported 00:31:41.524 Format NVM: Not Supported 00:31:41.524 Firmware Activate/Download: Not Supported 00:31:41.524 Namespace Management: Not Supported 00:31:41.524 Device Self-Test: Not Supported 00:31:41.524 Directives: Not Supported 00:31:41.524 NVMe-MI: Not Supported 00:31:41.524 Virtualization Management: Not Supported 00:31:41.524 Doorbell Buffer Config: Not Supported 00:31:41.524 Get LBA Status Capability: Not Supported 00:31:41.524 Command & Feature Lockdown Capability: Not Supported 00:31:41.524 Abort Command Limit: 1 00:31:41.524 Async Event Request Limit: 1 00:31:41.524 Number of Firmware Slots: N/A 00:31:41.524 Firmware Slot 1 Read-Only: N/A 00:31:41.524 Firmware Activation Without Reset: N/A 00:31:41.524 Multiple Update Detection Support: N/A 
00:31:41.524 Firmware Update Granularity: No Information Provided 00:31:41.524 Per-Namespace SMART Log: No 00:31:41.524 Asymmetric Namespace Access Log Page: Not Supported 00:31:41.524 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:41.524 Command Effects Log Page: Not Supported 00:31:41.524 Get Log Page Extended Data: Supported 00:31:41.524 Telemetry Log Pages: Not Supported 00:31:41.524 Persistent Event Log Pages: Not Supported 00:31:41.524 Supported Log Pages Log Page: May Support 00:31:41.524 Commands Supported & Effects Log Page: Not Supported 00:31:41.524 Feature Identifiers & Effects Log Page:May Support 00:31:41.524 NVMe-MI Commands & Effects Log Page: May Support 00:31:41.524 Data Area 4 for Telemetry Log: Not Supported 00:31:41.524 Error Log Page Entries Supported: 1 00:31:41.524 Keep Alive: Not Supported 00:31:41.524 00:31:41.524 NVM Command Set Attributes 00:31:41.524 ========================== 00:31:41.524 Submission Queue Entry Size 00:31:41.524 Max: 1 00:31:41.524 Min: 1 00:31:41.524 Completion Queue Entry Size 00:31:41.524 Max: 1 00:31:41.524 Min: 1 00:31:41.524 Number of Namespaces: 0 00:31:41.524 Compare Command: Not Supported 00:31:41.524 Write Uncorrectable Command: Not Supported 00:31:41.524 Dataset Management Command: Not Supported 00:31:41.524 Write Zeroes Command: Not Supported 00:31:41.524 Set Features Save Field: Not Supported 00:31:41.524 Reservations: Not Supported 00:31:41.524 Timestamp: Not Supported 00:31:41.524 Copy: Not Supported 00:31:41.524 Volatile Write Cache: Not Present 00:31:41.524 Atomic Write Unit (Normal): 1 00:31:41.524 Atomic Write Unit (PFail): 1 00:31:41.524 Atomic Compare & Write Unit: 1 00:31:41.524 Fused Compare & Write: Not Supported 00:31:41.524 Scatter-Gather List 00:31:41.524 SGL Command Set: Supported 00:31:41.524 SGL Keyed: Not Supported 00:31:41.524 SGL Bit Bucket Descriptor: Not Supported 00:31:41.524 SGL Metadata Pointer: Not Supported 00:31:41.524 Oversized SGL: Not Supported 00:31:41.524 SGL Metadata Address: Not Supported 00:31:41.524 SGL Offset: Supported 00:31:41.524 Transport SGL Data Block: Not Supported 00:31:41.524 Replay Protected Memory Block: Not Supported 00:31:41.524 00:31:41.524 Firmware Slot Information 00:31:41.524 ========================= 00:31:41.524 Active slot: 0 00:31:41.524 00:31:41.524 00:31:41.524 Error Log 00:31:41.524 ========= 00:31:41.524 00:31:41.524 Active Namespaces 00:31:41.524 ================= 00:31:41.524 Discovery Log Page 00:31:41.524 ================== 00:31:41.524 Generation Counter: 2 00:31:41.524 Number of Records: 2 00:31:41.524 Record Format: 0 00:31:41.524 00:31:41.524 Discovery Log Entry 0 00:31:41.524 ---------------------- 00:31:41.524 Transport Type: 3 (TCP) 00:31:41.524 Address Family: 1 (IPv4) 00:31:41.524 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:41.524 Entry Flags: 00:31:41.524 Duplicate Returned Information: 0 00:31:41.524 Explicit Persistent Connection Support for Discovery: 0 00:31:41.524 Transport Requirements: 00:31:41.524 Secure Channel: Not Specified 00:31:41.524 Port ID: 1 (0x0001) 00:31:41.524 Controller ID: 65535 (0xffff) 00:31:41.524 Admin Max SQ Size: 32 00:31:41.524 Transport Service Identifier: 4420 00:31:41.525 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:41.525 Transport Address: 10.0.0.1 00:31:41.525 Discovery Log Entry 1 00:31:41.525 ---------------------- 00:31:41.525 Transport Type: 3 (TCP) 00:31:41.525 Address Family: 1 (IPv4) 00:31:41.525 Subsystem Type: 2 (NVM Subsystem) 00:31:41.525 Entry Flags: 
00:31:41.525 Duplicate Returned Information: 0 00:31:41.525 Explicit Persistent Connection Support for Discovery: 0 00:31:41.525 Transport Requirements: 00:31:41.525 Secure Channel: Not Specified 00:31:41.525 Port ID: 1 (0x0001) 00:31:41.525 Controller ID: 65535 (0xffff) 00:31:41.525 Admin Max SQ Size: 32 00:31:41.525 Transport Service Identifier: 4420 00:31:41.525 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:41.525 Transport Address: 10.0.0.1 00:31:41.525 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:41.525 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.525 get_feature(0x01) failed 00:31:41.525 get_feature(0x02) failed 00:31:41.525 get_feature(0x04) failed 00:31:41.525 ===================================================== 00:31:41.525 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:41.525 ===================================================== 00:31:41.525 Controller Capabilities/Features 00:31:41.525 ================================ 00:31:41.525 Vendor ID: 0000 00:31:41.525 Subsystem Vendor ID: 0000 00:31:41.525 Serial Number: 49dbfde65b42fc3773ca 00:31:41.525 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:41.525 Firmware Version: 6.7.0-68 00:31:41.525 Recommended Arb Burst: 6 00:31:41.525 IEEE OUI Identifier: 00 00 00 00:31:41.525 Multi-path I/O 00:31:41.525 May have multiple subsystem ports: Yes 00:31:41.525 May have multiple controllers: Yes 00:31:41.525 Associated with SR-IOV VF: No 00:31:41.525 Max Data Transfer Size: Unlimited 00:31:41.525 Max Number of Namespaces: 1024 00:31:41.525 Max Number of I/O Queues: 128 00:31:41.525 NVMe Specification Version (VS): 1.3 00:31:41.525 NVMe Specification Version (Identify): 1.3 00:31:41.525 Maximum Queue Entries: 1024 00:31:41.525 Contiguous Queues Required: No 00:31:41.525 Arbitration Mechanisms Supported 00:31:41.525 Weighted Round Robin: Not Supported 00:31:41.525 Vendor Specific: Not Supported 00:31:41.525 Reset Timeout: 7500 ms 00:31:41.525 Doorbell Stride: 4 bytes 00:31:41.525 NVM Subsystem Reset: Not Supported 00:31:41.525 Command Sets Supported 00:31:41.525 NVM Command Set: Supported 00:31:41.525 Boot Partition: Not Supported 00:31:41.525 Memory Page Size Minimum: 4096 bytes 00:31:41.525 Memory Page Size Maximum: 4096 bytes 00:31:41.525 Persistent Memory Region: Not Supported 00:31:41.525 Optional Asynchronous Events Supported 00:31:41.525 Namespace Attribute Notices: Supported 00:31:41.525 Firmware Activation Notices: Not Supported 00:31:41.525 ANA Change Notices: Supported 00:31:41.525 PLE Aggregate Log Change Notices: Not Supported 00:31:41.525 LBA Status Info Alert Notices: Not Supported 00:31:41.525 EGE Aggregate Log Change Notices: Not Supported 00:31:41.525 Normal NVM Subsystem Shutdown event: Not Supported 00:31:41.525 Zone Descriptor Change Notices: Not Supported 00:31:41.525 Discovery Log Change Notices: Not Supported 00:31:41.525 Controller Attributes 00:31:41.525 128-bit Host Identifier: Supported 00:31:41.525 Non-Operational Permissive Mode: Not Supported 00:31:41.525 NVM Sets: Not Supported 00:31:41.525 Read Recovery Levels: Not Supported 00:31:41.525 Endurance Groups: Not Supported 00:31:41.525 Predictable Latency Mode: Not Supported 00:31:41.525 Traffic Based Keep ALive: Supported 00:31:41.525 Namespace Granularity: Not Supported 
00:31:41.525 SQ Associations: Not Supported 00:31:41.525 UUID List: Not Supported 00:31:41.525 Multi-Domain Subsystem: Not Supported 00:31:41.525 Fixed Capacity Management: Not Supported 00:31:41.525 Variable Capacity Management: Not Supported 00:31:41.525 Delete Endurance Group: Not Supported 00:31:41.525 Delete NVM Set: Not Supported 00:31:41.525 Extended LBA Formats Supported: Not Supported 00:31:41.525 Flexible Data Placement Supported: Not Supported 00:31:41.525 00:31:41.525 Controller Memory Buffer Support 00:31:41.525 ================================ 00:31:41.525 Supported: No 00:31:41.525 00:31:41.525 Persistent Memory Region Support 00:31:41.525 ================================ 00:31:41.525 Supported: No 00:31:41.525 00:31:41.525 Admin Command Set Attributes 00:31:41.525 ============================ 00:31:41.525 Security Send/Receive: Not Supported 00:31:41.525 Format NVM: Not Supported 00:31:41.525 Firmware Activate/Download: Not Supported 00:31:41.525 Namespace Management: Not Supported 00:31:41.525 Device Self-Test: Not Supported 00:31:41.525 Directives: Not Supported 00:31:41.525 NVMe-MI: Not Supported 00:31:41.525 Virtualization Management: Not Supported 00:31:41.525 Doorbell Buffer Config: Not Supported 00:31:41.525 Get LBA Status Capability: Not Supported 00:31:41.525 Command & Feature Lockdown Capability: Not Supported 00:31:41.525 Abort Command Limit: 4 00:31:41.525 Async Event Request Limit: 4 00:31:41.525 Number of Firmware Slots: N/A 00:31:41.525 Firmware Slot 1 Read-Only: N/A 00:31:41.525 Firmware Activation Without Reset: N/A 00:31:41.525 Multiple Update Detection Support: N/A 00:31:41.525 Firmware Update Granularity: No Information Provided 00:31:41.525 Per-Namespace SMART Log: Yes 00:31:41.525 Asymmetric Namespace Access Log Page: Supported 00:31:41.525 ANA Transition Time : 10 sec 00:31:41.525 00:31:41.525 Asymmetric Namespace Access Capabilities 00:31:41.525 ANA Optimized State : Supported 00:31:41.525 ANA Non-Optimized State : Supported 00:31:41.525 ANA Inaccessible State : Supported 00:31:41.525 ANA Persistent Loss State : Supported 00:31:41.525 ANA Change State : Supported 00:31:41.525 ANAGRPID is not changed : No 00:31:41.525 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:41.525 00:31:41.525 ANA Group Identifier Maximum : 128 00:31:41.525 Number of ANA Group Identifiers : 128 00:31:41.525 Max Number of Allowed Namespaces : 1024 00:31:41.525 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:41.525 Command Effects Log Page: Supported 00:31:41.525 Get Log Page Extended Data: Supported 00:31:41.525 Telemetry Log Pages: Not Supported 00:31:41.525 Persistent Event Log Pages: Not Supported 00:31:41.525 Supported Log Pages Log Page: May Support 00:31:41.525 Commands Supported & Effects Log Page: Not Supported 00:31:41.525 Feature Identifiers & Effects Log Page:May Support 00:31:41.525 NVMe-MI Commands & Effects Log Page: May Support 00:31:41.525 Data Area 4 for Telemetry Log: Not Supported 00:31:41.525 Error Log Page Entries Supported: 128 00:31:41.525 Keep Alive: Supported 00:31:41.525 Keep Alive Granularity: 1000 ms 00:31:41.525 00:31:41.525 NVM Command Set Attributes 00:31:41.525 ========================== 00:31:41.525 Submission Queue Entry Size 00:31:41.525 Max: 64 00:31:41.525 Min: 64 00:31:41.525 Completion Queue Entry Size 00:31:41.525 Max: 16 00:31:41.525 Min: 16 00:31:41.525 Number of Namespaces: 1024 00:31:41.525 Compare Command: Not Supported 00:31:41.525 Write Uncorrectable Command: Not Supported 00:31:41.525 Dataset Management Command: Supported 
00:31:41.525 Write Zeroes Command: Supported 00:31:41.525 Set Features Save Field: Not Supported 00:31:41.525 Reservations: Not Supported 00:31:41.525 Timestamp: Not Supported 00:31:41.525 Copy: Not Supported 00:31:41.525 Volatile Write Cache: Present 00:31:41.525 Atomic Write Unit (Normal): 1 00:31:41.525 Atomic Write Unit (PFail): 1 00:31:41.525 Atomic Compare & Write Unit: 1 00:31:41.525 Fused Compare & Write: Not Supported 00:31:41.525 Scatter-Gather List 00:31:41.525 SGL Command Set: Supported 00:31:41.525 SGL Keyed: Not Supported 00:31:41.525 SGL Bit Bucket Descriptor: Not Supported 00:31:41.525 SGL Metadata Pointer: Not Supported 00:31:41.525 Oversized SGL: Not Supported 00:31:41.525 SGL Metadata Address: Not Supported 00:31:41.525 SGL Offset: Supported 00:31:41.525 Transport SGL Data Block: Not Supported 00:31:41.525 Replay Protected Memory Block: Not Supported 00:31:41.525 00:31:41.525 Firmware Slot Information 00:31:41.525 ========================= 00:31:41.525 Active slot: 0 00:31:41.525 00:31:41.525 Asymmetric Namespace Access 00:31:41.525 =========================== 00:31:41.525 Change Count : 0 00:31:41.525 Number of ANA Group Descriptors : 1 00:31:41.525 ANA Group Descriptor : 0 00:31:41.525 ANA Group ID : 1 00:31:41.525 Number of NSID Values : 1 00:31:41.525 Change Count : 0 00:31:41.525 ANA State : 1 00:31:41.525 Namespace Identifier : 1 00:31:41.525 00:31:41.525 Commands Supported and Effects 00:31:41.525 ============================== 00:31:41.525 Admin Commands 00:31:41.525 -------------- 00:31:41.525 Get Log Page (02h): Supported 00:31:41.525 Identify (06h): Supported 00:31:41.525 Abort (08h): Supported 00:31:41.525 Set Features (09h): Supported 00:31:41.525 Get Features (0Ah): Supported 00:31:41.525 Asynchronous Event Request (0Ch): Supported 00:31:41.526 Keep Alive (18h): Supported 00:31:41.526 I/O Commands 00:31:41.526 ------------ 00:31:41.526 Flush (00h): Supported 00:31:41.526 Write (01h): Supported LBA-Change 00:31:41.526 Read (02h): Supported 00:31:41.526 Write Zeroes (08h): Supported LBA-Change 00:31:41.526 Dataset Management (09h): Supported 00:31:41.526 00:31:41.526 Error Log 00:31:41.526 ========= 00:31:41.526 Entry: 0 00:31:41.526 Error Count: 0x3 00:31:41.526 Submission Queue Id: 0x0 00:31:41.526 Command Id: 0x5 00:31:41.526 Phase Bit: 0 00:31:41.526 Status Code: 0x2 00:31:41.526 Status Code Type: 0x0 00:31:41.526 Do Not Retry: 1 00:31:41.526 Error Location: 0x28 00:31:41.526 LBA: 0x0 00:31:41.526 Namespace: 0x0 00:31:41.526 Vendor Log Page: 0x0 00:31:41.526 ----------- 00:31:41.526 Entry: 1 00:31:41.526 Error Count: 0x2 00:31:41.526 Submission Queue Id: 0x0 00:31:41.526 Command Id: 0x5 00:31:41.526 Phase Bit: 0 00:31:41.526 Status Code: 0x2 00:31:41.526 Status Code Type: 0x0 00:31:41.526 Do Not Retry: 1 00:31:41.526 Error Location: 0x28 00:31:41.526 LBA: 0x0 00:31:41.526 Namespace: 0x0 00:31:41.526 Vendor Log Page: 0x0 00:31:41.526 ----------- 00:31:41.526 Entry: 2 00:31:41.526 Error Count: 0x1 00:31:41.526 Submission Queue Id: 0x0 00:31:41.526 Command Id: 0x4 00:31:41.526 Phase Bit: 0 00:31:41.526 Status Code: 0x2 00:31:41.526 Status Code Type: 0x0 00:31:41.526 Do Not Retry: 1 00:31:41.526 Error Location: 0x28 00:31:41.526 LBA: 0x0 00:31:41.526 Namespace: 0x0 00:31:41.526 Vendor Log Page: 0x0 00:31:41.526 00:31:41.526 Number of Queues 00:31:41.526 ================ 00:31:41.526 Number of I/O Submission Queues: 128 00:31:41.526 Number of I/O Completion Queues: 128 00:31:41.526 00:31:41.526 ZNS Specific Controller Data 00:31:41.526 
============================ 00:31:41.526 Zone Append Size Limit: 0 00:31:41.526 00:31:41.526 00:31:41.526 Active Namespaces 00:31:41.526 ================= 00:31:41.526 get_feature(0x05) failed 00:31:41.526 Namespace ID:1 00:31:41.526 Command Set Identifier: NVM (00h) 00:31:41.526 Deallocate: Supported 00:31:41.526 Deallocated/Unwritten Error: Not Supported 00:31:41.526 Deallocated Read Value: Unknown 00:31:41.526 Deallocate in Write Zeroes: Not Supported 00:31:41.526 Deallocated Guard Field: 0xFFFF 00:31:41.526 Flush: Supported 00:31:41.526 Reservation: Not Supported 00:31:41.526 Namespace Sharing Capabilities: Multiple Controllers 00:31:41.526 Size (in LBAs): 1953525168 (931GiB) 00:31:41.526 Capacity (in LBAs): 1953525168 (931GiB) 00:31:41.526 Utilization (in LBAs): 1953525168 (931GiB) 00:31:41.526 UUID: fa05334f-2a65-4244-9d10-3f560cf739b4 00:31:41.526 Thin Provisioning: Not Supported 00:31:41.526 Per-NS Atomic Units: Yes 00:31:41.526 Atomic Boundary Size (Normal): 0 00:31:41.526 Atomic Boundary Size (PFail): 0 00:31:41.526 Atomic Boundary Offset: 0 00:31:41.526 NGUID/EUI64 Never Reused: No 00:31:41.526 ANA group ID: 1 00:31:41.526 Namespace Write Protected: No 00:31:41.526 Number of LBA Formats: 1 00:31:41.526 Current LBA Format: LBA Format #00 00:31:41.526 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:41.526 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:41.526 rmmod nvme_tcp 00:31:41.526 rmmod nvme_fabrics 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:41.526 07:19:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:44.058 
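The clean_kernel_target trace that follows tears down the kernel NVMe-oF target through configfs. Gathered into one runnable sketch for readability (the bare "echo 0" in the trace hides its redirection target, which is assumed here to be the namespace enable flag):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"        # quiesce the namespace (assumed target of the bare 'echo 0')
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink subsystem from port
    rmdir "$subsys/namespaces/1"                  # then remove the namespace,
    rmdir "$nvmet/ports/1"                        # the port,
    rmdir "$subsys"                               # and the subsystem directories
    modprobe -r nvmet_tcp nvmet                   # finally unload the kernel target modules

The ordering matters: the port-to-subsystem symlink has to go first, because configfs refuses to rmdir a directory that still has children or references.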
07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:44.058 07:19:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:44.058 07:19:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:44.989 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:44.989 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:44.989 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:44.989 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:44.989 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:44.989 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:44.989 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:44.989 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:44.989 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:45.922 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:46.180 00:31:46.180 real 0m9.307s 00:31:46.180 user 0m2.016s 00:31:46.180 sys 0m3.380s 00:31:46.180 07:19:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:46.180 07:19:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.180 ************************************ 00:31:46.180 END TEST nvmf_identify_kernel_target 00:31:46.180 ************************************ 00:31:46.180 07:19:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:46.180 07:19:15 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:46.180 07:19:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:46.180 07:19:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.180 07:19:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:46.180 ************************************ 00:31:46.180 START TEST nvmf_auth_host 00:31:46.180 ************************************ 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:46.180 * Looking for test storage... 00:31:46.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.180 07:19:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:46.181 07:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.079 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.080 
07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:48.080 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:48.080 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:48.080 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:48.080 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:48.080 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:48.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:31:48.338 00:31:48.338 --- 10.0.0.2 ping statistics --- 00:31:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.338 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:48.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:31:48.338 00:31:48.338 --- 10.0.0.1 ping statistics --- 00:31:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.338 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1652564 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1652564 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1652564 ']' 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
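The test network is now in place. A condensed recap of the plumbing traced above: the first E810 port (cvl_0_0) becomes the target NIC inside a network namespace, while the second port (cvl_0_1) stays in the root namespace as the initiator side:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

With both pings answering in well under a millisecond, nvmf_tgt is started inside the namespace: NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD wrapper, which is why the nvmf_tgt invocation above runs under "ip netns exec cvl_0_0_ns_spdk".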
00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:48.338 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.596 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:48.596 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:48.596 07:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:48.596 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:48.596 07:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=78a7dd185f970079b0a877198f1d1db4 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NRX 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 78a7dd185f970079b0a877198f1d1db4 0 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 78a7dd185f970079b0a877198f1d1db4 0 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=78a7dd185f970079b0a877198f1d1db4 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:48.596 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NRX 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NRX 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NRX 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:48.854 
07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c63d584ad0fcf05d2c247d2d07a4867bb0807f1190aa6093344342aaf00659cd 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XF8 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c63d584ad0fcf05d2c247d2d07a4867bb0807f1190aa6093344342aaf00659cd 3 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c63d584ad0fcf05d2c247d2d07a4867bb0807f1190aa6093344342aaf00659cd 3 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c63d584ad0fcf05d2c247d2d07a4867bb0807f1190aa6093344342aaf00659cd 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XF8 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XF8 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XF8 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=034617e3a946a48c5567f47b0451f06b1d4cdc96665ff62d 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1QZ 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 034617e3a946a48c5567f47b0451f06b1d4cdc96665ff62d 0 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 034617e3a946a48c5567f47b0451f06b1d4cdc96665ff62d 0 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=034617e3a946a48c5567f47b0451f06b1d4cdc96665ff62d 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1QZ 00:31:48.854 07:19:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1QZ 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.1QZ 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=469860e2c8319be62cf3229005708932714f178546bcbc63 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lVa 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 469860e2c8319be62cf3229005708932714f178546bcbc63 2 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 469860e2c8319be62cf3229005708932714f178546bcbc63 2 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=469860e2c8319be62cf3229005708932714f178546bcbc63 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lVa 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lVa 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.lVa 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ac3bad8294f3994ef06b95b5deba4dca 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.a4X 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ac3bad8294f3994ef06b95b5deba4dca 1 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ac3bad8294f3994ef06b95b5deba4dca 1 
00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ac3bad8294f3994ef06b95b5deba4dca 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.a4X 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.a4X 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.a4X 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a6e6091a60d281ab0f582b27844d580b 00:31:48.854 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.W6l 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a6e6091a60d281ab0f582b27844d580b 1 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a6e6091a60d281ab0f582b27844d580b 1 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a6e6091a60d281ab0f582b27844d580b 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.W6l 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.W6l 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.W6l 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=b66752f7d6a1c804940d638fddd8a87a6125f347945d0987 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3Uf 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b66752f7d6a1c804940d638fddd8a87a6125f347945d0987 2 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b66752f7d6a1c804940d638fddd8a87a6125f347945d0987 2 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b66752f7d6a1c804940d638fddd8a87a6125f347945d0987 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3Uf 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3Uf 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.3Uf 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d4181027b9706124965951b26f217049 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.0NF 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4181027b9706124965951b26f217049 0 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4181027b9706124965951b26f217049 0 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4181027b9706124965951b26f217049 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.0NF 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.0NF 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.0NF 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fc5e3d7208586130dd4957d9c6fb794e4aab1e82f636e2e7d856291e982f6c6d 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XUg 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc5e3d7208586130dd4957d9c6fb794e4aab1e82f636e2e7d856291e982f6c6d 3 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fc5e3d7208586130dd4957d9c6fb794e4aab1e82f636e2e7d856291e982f6c6d 3 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc5e3d7208586130dd4957d9c6fb794e4aab1e82f636e2e7d856291e982f6c6d 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XUg 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XUg 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.XUg 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1652564 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1652564 ']' 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
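At this point all five key/ckey pairs are on disk. The gen_dhchap_key helper traced above draws len hex characters from /dev/urandom via xxd and hands them to a python snippet together with the DHHC-1 prefix and a digest number (0=null, 1=sha256, 2=sha384, 3=sha512). The python body itself is not captured in this log; the sketch below is a hedged reconstruction that assumes the standard NVMe DH-HMAC-CHAP secret representation, DHHC-1:<digest-id>:<base64 of the key bytes plus their CRC-32>::

    gen_dhchap_key() {                                    # usage: gen_dhchap_key <digest> <hex-len>
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # draw len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        # Encoding is an assumption: base64 over key bytes plus a little-endian CRC-32 suffix.
        KEY=$key ID=${ids[$digest]} python3 -c 'import base64,os,zlib; k=bytes.fromhex(os.environ["KEY"]); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02d:%s:" % (int(os.environ["ID"]), base64.b64encode(k+c).decode()))' > "$file"
        chmod 0600 "$file"                                # secrets kept mode 0600, as in the trace
        echo "$file"
    }

Called as in the trace, keys[0]=$(gen_dhchap_key null 32) yields a /tmp/spdk.key-null.XXX file holding one DHHC-1 secret string, which the rpc_cmd keyring_file_add_key calls below register with the target.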
00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:49.113 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NRX 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XF8 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XF8 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.1QZ 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.lVa ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lVa 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.a4X 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.W6l ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.W6l 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
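
The key files generated above are then registered with the SPDK application listening on /var/tmp/spdk.sock; the keyring_file_add_key RPCs traced around this point all come from one loop in host/auth.sh, where rpc_cmd is the harness wrapper around scripts/rpc.py. A condensed sketch of that loop:

    # host/auth.sh@80-82, condensed; keys[i]/ckeys[i] hold the /tmp/spdk.key-* paths
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        # controller (bidirectional) keys are optional; ckeys[4] is empty in this run
        [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done
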
00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.3Uf 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.0NF ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.0NF 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.XUg 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
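
configure_kernel_target, traced next, builds the kernel-mode NVMe/TCP target that the SPDK host will authenticate against: it loads nvmet, claims an unused local NVMe namespace as backing storage, then creates and wires up the subsystem, namespace, and port nodes under configfs. xtrace does not print redirections, so the attribute files being written are not visible in the trace; the paths in this sketch are inferred from the standard kernel nvmet configfs layout and should be read as assumptions:

    # Condensed sketch of configure_kernel_target (attribute names inferred)
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"          # re-tightened to 0 by host/auth.sh@37
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"    # expose the subsystem on the port
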
00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:49.372 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:49.657 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:49.657 07:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:50.590 Waiting for block devices as requested 00:31:50.590 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:50.590 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:50.590 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:50.848 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:50.848 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:50.848 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:51.106 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:51.106 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:51.106 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:51.364 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:51.364 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:51.364 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:51.364 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:51.623 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:51.623 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:51.623 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:51.881 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:52.139 07:19:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:52.139 No valid GPT data, bailing 00:31:52.396 07:19:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:52.396 07:19:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:52.396 07:19:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:52.396 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:52.396 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:52.397
00:31:52.397 Discovery Log Number of Records 2, Generation counter 2
00:31:52.397 =====Discovery Log Entry 0======
00:31:52.397 trtype: tcp
00:31:52.397 adrfam: ipv4
00:31:52.397 subtype: current discovery subsystem
00:31:52.397 treq: not specified, sq flow control disable supported
00:31:52.397 portid: 1
00:31:52.397 trsvcid: 4420
00:31:52.397 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:31:52.397 traddr: 10.0.0.1
00:31:52.397 eflags: none
00:31:52.397 sectype: none
00:31:52.397 =====Discovery Log Entry 1======
00:31:52.397 trtype: tcp
00:31:52.397 adrfam: ipv4
00:31:52.397 subtype: nvme subsystem
00:31:52.397 treq: not specified, sq flow control disable supported
00:31:52.397 portid: 1
00:31:52.397 trsvcid: 4420
00:31:52.397 subnqn: nqn.2024-02.io.spdk:cnode0
00:31:52.397 traddr: 10.0.0.1
00:31:52.397 eflags: none
00:31:52.397 sectype: none
00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 
]] 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.397 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.655 nvme0n1 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.655 
07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.655 
07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.655 07:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.913 nvme0n1 00:31:52.913 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.913 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.913 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.913 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.914 07:19:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.914 nvme0n1 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
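
Each digest/dhgroup/keyid combination in the matrix exercises connect_authenticate: program the DHCHAP parameters into the host-side bdev_nvme layer, attach to the kernel target with the key under test, and check that the controller registers before detaching again. Every RPC name and flag below appears verbatim in the trace; the body is condensed (the real function also resolves the target address via get_main_ns_ip, which is what the repeated ip_candidates lookups in the trace are doing):

    # Condensed shape of connect_authenticate (host/auth.sh@55-65)
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # authentication succeeded iff the controller shows up under its name
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
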
00:31:52.914 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.172 nvme0n1 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.172 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:53.430 07:19:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.430 nvme0n1 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.430 07:19:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.431 07:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 nvme0n1 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.688 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.946 nvme0n1 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.946 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.204 nvme0n1 00:31:54.204 
07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.204 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.462 nvme0n1 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
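
The nvmet_auth_set_key calls interleaved with each iteration are the target-side half of the handshake: they push the hash, DH group, and DHHC-1 secret(s) for host0 into the nvmet host node created back at host/auth.sh@36. As with the configfs setup earlier, xtrace hides the redirection targets, so the dhchap_* attribute names below are inferred from the kernel nvmet layout, and the key lookup is an assumption (the trace only shows key/ckey already expanded to DHHC-1 strings):

    # Sketch of nvmet_auth_set_key (host/auth.sh@42-51); paths inferred
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}   # DHHC-1:... strings (lookup assumed)
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha256)
        echo "$dhgroup" > "$host/dhchap_dhgroup"        # e.g. ffdhe3072
        echo "$key" > "$host/dhchap_key"
        [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
    }
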
00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.462 07:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.719 nvme0n1 00:31:54.719 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.719 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.719 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.719 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.719 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.719 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.720 
07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.720 07:19:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.720 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.978 nvme0n1 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:54.978 07:19:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.978 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.545 nvme0n1 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.545 07:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.546 07:19:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.546 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.546 07:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.804 nvme0n1 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.804 07:19:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.804 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.063 nvme0n1 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
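The cycle that just completed for keyid 2 is the host-side half, connect_authenticate (host/auth.sh@104): it restricts the initiator to a single digest/dhgroup pair via bdev_nvme_set_options, attaches with the keyring names of the host and controller secrets, confirms the controller came up authenticated by checking bdev_nvme_get_controllers for nvme0, and detaches before the next keyid. The get_main_ns_ip block (nvmf/common.sh@741-@755) that precedes each attach only resolves which environment variable carries the target-facing address for the transport, i.e. NVMF_INITIATOR_IP = 10.0.0.1 for tcp. Replayed against SPDK's scripts/rpc.py, which is what the rpc_cmd wrapper in these traces invokes; key2/ckey2 are keyring entries registered by test setup outside this excerpt:

  # Host-side connect/verify/teardown for one iteration (sketch).
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc.py bdev_nvme_detach_controller nvme0              # clean slate for the next keyid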
00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.063 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.321 nvme0n1 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.580 07:19:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.580 07:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.839 nvme0n1 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:56.839 07:19:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.839 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.405 nvme0n1 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.405 
07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.405 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.406 07:19:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.406 07:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.972 nvme0n1 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.972 07:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.973 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.973 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.973 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.538 nvme0n1 00:31:58.538 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.538 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.538 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.538 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.538 07:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.538 07:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.796 
07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.796 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.363 nvme0n1 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.363 07:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.930 nvme0n1 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.930 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.931 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.931 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.931 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.931 07:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.931 07:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:59.931 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.931 07:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.869 nvme0n1 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.869 07:19:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:00.869 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.870 07:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.834 nvme0n1 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.834 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.835 07:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.776 nvme0n1 00:32:02.776 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.776 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.776 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.776 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.776 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.034 
07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
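Each nvmet_auth_set_key call above programs the Linux nvmet target with the digest, DH group, and key that the host will be challenged against. A minimal sketch of the target-side writes, assuming the standard nvmet configfs layout (the /sys/kernel/config/nvmet/hosts attribute paths are an assumption; the echoed values are copied verbatim from the sha256/ffdhe8192 keyid=3 iteration above):

# hypothetical configfs path for the host entry; values taken from the trace
hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$hostdir/dhchap_hash"      # digest, in kernel crypto notation
echo 'ffdhe8192'    > "$hostdir/dhchap_dhgroup"   # FFDHE group for the DH exchange
echo 'DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==:' > "$hostdir/dhchap_key"       # host key
echo 'DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR:' > "$hostdir/dhchap_ctrl_key"  # controller key, only when a ckey is set

The DHHC-1:NN: prefix encodes how the secret was transformed: 00 is a cleartext secret, while 01/02/03 mark secrets wrapped with SHA-256/384/512, which is why keyid 0 carries DHHC-1:00: but keyid 3 carries DHHC-1:02:.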
00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.034 07:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.965 nvme0n1 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.965 
07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:03.965 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.966 07:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.895 nvme0n1 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:04.895 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.896 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.153 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.153 nvme0n1 00:32:05.153 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.153 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.153 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
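On the host side, each iteration drives SPDK over JSON-RPC: restrict the initiator to the digest/DH group under test, attach with the matching key slot, verify the controller came up, and detach. The equivalent calls with scripts/rpc.py (rpc_cmd in this suite is a thin wrapper around it; the default RPC socket is assumed, and key1/ckey1 name keys registered earlier in the run):

rpc="scripts/rpc.py"
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1    # ckey requests bidirectional auth
$rpc bdev_nvme_get_controllers | jq -r '.[].name' # expect nvme0
$rpc bdev_nvme_detach_controller nvme0            # clean up before the next keyid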
00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.154 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.411 nvme0n1 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.411 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.412 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.670 nvme0n1 00:32:05.670 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.670 07:19:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.670 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.670 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.670 07:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.670 07:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.670 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.671 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.929 nvme0n1 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.929 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.188 nvme0n1 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
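The get_main_ns_ip frames repeated throughout this trace resolve which address the initiator should dial for the active transport. A sketch reconstructed from the nvmf/common.sh@741-755 lines (using TEST_TRANSPORT as the selector is an assumption; only "tcp" is exercised in this log):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # pick the env var *name* per transport
    [[ -z ${!ip} ]] && return 1           # indirect expansion; yields 10.0.0.1 here
    echo "${!ip}"
}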
00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.188 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.447 nvme0n1 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
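Stepping back, the host/auth.sh@100-103 frames show the harness sweeping the full matrix: for every digest, every DH group, and all five key slots, it reprograms the target and reconnects. A reconstruction of the driving loops, with partly assumed array contents (this section only shows sha256/sha384 and ffdhe2048 through ffdhe8192, so sha512 and ffdhe4096 are inferred by symmetry):

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do             # slots 0..4, keys[] set up earlier
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
        done
    done
done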
00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.447 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.705 nvme0n1 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.705 07:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.705 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.706 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.963 nvme0n1 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.963 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.964 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.222 nvme0n1 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.222 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 nvme0n1 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 07:19:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.480 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.481 07:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.738 nvme0n1 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.738 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.996 nvme0n1 00:32:07.996 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.996 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.996 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.996 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.996 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.255 07:19:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.255 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.513 nvme0n1 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:08.513 07:19:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.513 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.514 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.514 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.514 07:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.514 07:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:08.514 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.514 07:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.772 nvme0n1 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.772 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:08.773 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.031 nvme0n1 00:32:09.031 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.031 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.031 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.031 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.031 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.031 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:09.288 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.289 07:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.852 nvme0n1 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:09.852 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.853 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.416 nvme0n1 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.416 07:19:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.416 07:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.982 nvme0n1 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.982 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.983 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.548 nvme0n1 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.548 07:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.548 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.805 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
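The nvmet_auth_set_key calls traced at host/auth.sh@42-51 provision the target side of DH-HMAC-CHAP before each connection attempt: they pick a digest, DH group and key ID, then echo 'hmac(<digest>)', the group name, the DHHC-1 secret, and (only when one is defined) the controller secret. The xtrace output does not show where those echo lines are redirected; a minimal sketch, assuming they land in the kernel nvmet configfs entry for the allowed host (the path below is an assumption, not visible in this log):

    nvmet_auth_set_key() { # sketch reconstructed from the trace, not the verbatim script
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs location for the allowed host; not shown in the xtrace output.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # auth.sh@48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # auth.sh@49
        echo "$key"          > "$host/dhchap_key"      # auth.sh@50
        # keyid 4 carries no controller key (ckey=''), so this write is skipped there.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51
    }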
00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.806 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.371 nvme0n1 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
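Each connect_authenticate pass traced at host/auth.sh@55-65 exercises the SPDK initiator side: it restricts bdev_nvme to a single digest/DH-group pair, attaches a controller with the matching key, verifies the controller actually came up, and detaches again. The bare nvme0n1 lines interleaved in the log are the namespace block device appearing as each attach succeeds. A sketch of that flow, reconstructed from the xtrace line numbers rather than the verbatim script (rpc_cmd and get_main_ns_ip are the test helpers seen in the trace; for tcp, get_main_ns_ip resolves to NVMF_INITIATOR_IP, here 10.0.0.1):

    connect_authenticate() { # reconstruction; function body inferred from auth.sh@55-65
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to '--dhchap-ctrlr-key ckeyN' only when a controller key exists;
        # for keyid 4 the array stays empty and the flag is omitted, as seen above.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The key names key0..key4 (and ckey0..ckey3) are assumed to have been registered earlier in the script; only their use is visible in this excerpt.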
00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.371 07:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.304 nvme0n1 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.304 07:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.243 nvme0n1 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.243 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.500 07:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.431 nvme0n1 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.431 07:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.364 nvme0n1 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:16.364 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.365 07:19:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.365 07:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.298 nvme0n1 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:17.298 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.299 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.556 nvme0n1 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.556 07:19:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.556 07:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.813 nvme0n1 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:17.813 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.814 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.071 nvme0n1 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.071 07:19:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.071 07:19:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.071 nvme0n1 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.071 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.328 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.328 nvme0n1 00:32:18.329 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.329 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.329 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.329 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.329 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.329 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.586 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.586 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.586 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.586 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.586 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.586 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.587 07:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.587 nvme0n1 00:32:18.587 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.587 
07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.587 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.587 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.587 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.587 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.846 07:19:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.846 nvme0n1 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.846 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
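
For readers reconstructing the target-side half of this exchange: the echo steps traced here (the digest string 'hmac(sha512)', the dhgroup name, the key, and, when present, the controller key) are what nvmet_auth_set_key feeds to the kernel target before each connect attempt. The helper's body and its redirections are hidden by xtrace, so the following is only a minimal sketch of what such a helper plausibly does, assuming the Linux nvmet configfs layout; the hostnqn path and the dhchap_* attribute names are inferred from that ABI, not from this log.

    nvmet_auth_set_key() {
        # Sketch only: xtrace hides the redirections, so the destination files
        # are inferred from the Linux nvmet configfs ABI, not from this trace.
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}               # arrays from the suite
        local host_dir="/sys/kernel/config/nvmet/hosts/${hostnqn}"  # hostnqn assumed
        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"          # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"       # e.g. ffdhe3072
        echo "${key}"          > "${host_dir}/dhchap_key"
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
    }
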
00:32:19.104 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.105 nvme0n1 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.105 07:19:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.105 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
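
The ip_candidates fragment that repeats before every attach is get_main_ns_ip resolving which address the initiator dials for the active transport. Reassembled from the checks visible in the trace into a behavior-equivalent sketch (TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 are set by the harness and assumed here):

    # Behavior-equivalent sketch of the nvmf/common.sh@741-755 steps traced above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # Fall back to the initiator IP unless the transport has a candidate.
        if [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]]; then
            ip=NVMF_INITIATOR_IP
        else
            ip=${ip_candidates[$TEST_TRANSPORT]}
        fi
        ip=${!ip}                  # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -n ${ip} ]] || return 1 # no usable address
        echo "${ip}"
    }
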
00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.363 nvme0n1 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.363 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.621 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.621 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:19.622 
07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.622 07:19:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.622 nvme0n1 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.622 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.880 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.138 nvme0n1 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.138 07:19:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.138 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.396 nvme0n1 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
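
Every connect_authenticate round in this section issues the same RPC sequence, visible in full in the block above. Condensed into a sketch (rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py; the NQNs, address, and port are copied from the trace, and the keys/ckeys arrays come from the surrounding script):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pin the initiator to a single digest/dhgroup combination.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
        # Add --dhchap-ctrlr-key only when a controller key exists for this slot.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Authentication succeeded iff the controller materialized under its name,
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        # then detach so the next digest/dhgroup/key combination starts clean.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
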
00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.396 07:19:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.961 nvme0n1 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:20.961 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.962 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.219 nvme0n1 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.219 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.478 nvme0n1 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
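
The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment on this line is why keyid 4 attached without any --dhchap-ctrlr-key argument earlier: ${var:+word} expands to word only when var is set and non-empty, and the array assignment collapses to zero elements otherwise, keeping the flag off the command line entirely. A self-contained illustration of the idiom (the key material below is a placeholder, not one of the keys above):

    #!/usr/bin/env bash
    # ${var:+word} keeps an optional flag pair out of argv entirely when the
    # value is empty, instead of passing an empty string as an argument.
    ckeys=([0]="DHHC-1:03:placeholder=" [4]="")   # hypothetical: slot 4 has no ctrlr key

    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: ${#ckey[@]} extra argument(s): ${ckey[*]}"
    done
    # keyid=0: 2 extra argument(s): --dhchap-ctrlr-key ckey0
    # keyid=4: 0 extra argument(s):
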
00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.478 07:19:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.042 nvme0n1 00:32:22.042 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.042 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.042 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.042 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.042 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.042 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.300 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.301 07:19:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.866 nvme0n1 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:22.866 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.867 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.433 nvme0n1 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:23.433 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.434 07:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.996 nvme0n1 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.996 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.559 nvme0n1 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.559 07:19:53 
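
The nvmet_auth_set_key steps traced above (host/auth.sh@42-51) install the per-host DH-HMAC-CHAP material on the kernel target before each connect attempt: the HMAC digest name, the FFDHE group, the host key, and, for bidirectional cases, the controller key. xtrace hides the redirections, so the configfs attribute names in this sketch are an assumption based on the kernel nvmet auth interface; the host directory is the one removed during cleanup later in this log, and the values shown are the keyid=4 values just traced:

    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"    # host/auth.sh@48
    echo 'ffdhe6144' > "$host_dir/dhchap_dhgroup"    # host/auth.sh@49
    echo 'DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=:' > "$host_dir/dhchap_key"    # host/auth.sh@50
    # host/auth.sh@51 writes dhchap_ctrl_key only when a controller key exists;
    # keyid=4 has ckey='' above, so this iteration tests unidirectional auth.
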
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.559 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR: 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: ]] 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzYzZDU4NGFkMGZjZjA1ZDJjMjQ3ZDJkMDdhNDg2N2JiMDgwN2YxMTkwYWE2MDkzMzQ0MzQyYWFmMDA2NTljZNOGIGs=: 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.560 07:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.491 nvme0n1 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.491 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.749 07:19:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.682 nvme0n1 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.682 07:19:55 
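
Every iteration resolves the initiator address through get_main_ns_ip (nvmf/common.sh@741-755), which maps the transport to the name of the variable holding the address and then dereferences it. A hedged reconstruction follows: the transport variable's name is an assumption (xtrace only shows its value, tcp), and the indirect expansion is inferred from ip flipping from NVMF_INITIATOR_IP to 10.0.0.1 in the trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@744
            ["tcp"]=NVMF_INITIATOR_IP       # nvmf/common.sh@745
        )
        # nvmf/common.sh@747: bail out if the transport is unset or unmapped
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable's *name*
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1               # nvmf/common.sh@750
        echo "$ip"                             # nvmf/common.sh@755
    }
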
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWMzYmFkODI5NGYzOTk0ZWYwNmI5NWI1ZGViYTRkY2E4+MCg: 00:32:26.682 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: ]] 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZlNjA5MWE2MGQyODFhYjBmNTgyYjI3ODQ0ZDU4MGLfRHjx: 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.683 07:19:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.683 07:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.615 nvme0n1 00:32:27.615 07:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.615 07:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.615 07:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.615 07:19:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.615 07:19:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjY2NzUyZjdkNmExYzgwNDk0MGQ2MzhmZGRkOGE4N2E2MTI1ZjM0Nzk0NWQwOTg35SiQ0Q==: 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxODEwMjdiOTcwNjEyNDk2NTk1MWIyNmYyMTcwNDlVbWLR: 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:27.615 07:19:57 
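
connect_authenticate (invoked at host/auth.sh@104, body at @55-65) is the host-side half of each iteration: restrict the initiator to a single digest/DH-group pair, attach with this keyid's key material, confirm the controller enumerated, then detach so the next combination starts clean. This sketch is assembled from the RPCs traced in this log; the exact argument plumbing in the script is hedged, but each call matches a traced line:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})     # @58
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"         # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                         # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                           # @65
    }

The bare nvme0n1 lines between iterations are the attach RPC printing the namespace bdev it created, which is exactly what a successful handshake should produce before the name check and detach run.
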
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.615 07:19:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.984 nvme0n1 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM1ZTNkNzIwODU4NjEzMGRkNDk1N2Q5YzZmYjc5NGU0YWFiMWU4MmY2MzZlMmU3ZDg1NjI5MWU5ODJmNmM2ZGWfTlM=: 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:28.984 07:19:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.918 nvme0n1 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.918 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM0NjE3ZTNhOTQ2YTQ4YzU1NjdmNDdiMDQ1MWYwNmIxZDRjZGM5NjY2NWZmNjJkhnJdGA==: 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDY5ODYwZTJjODMxOWJlNjJjZjMyMjkwMDU3MDg5MzI3MTRmMTc4NTQ2YmNiYzYzNzL8Lw==: 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.919 
07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.919 request: 00:32:29.919 { 00:32:29.919 "name": "nvme0", 00:32:29.919 "trtype": "tcp", 00:32:29.919 "traddr": "10.0.0.1", 00:32:29.919 "adrfam": "ipv4", 00:32:29.919 "trsvcid": "4420", 00:32:29.919 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:29.919 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:29.919 "prchk_reftag": false, 00:32:29.919 "prchk_guard": false, 00:32:29.919 "hdgst": false, 00:32:29.919 "ddgst": false, 00:32:29.919 "method": "bdev_nvme_attach_controller", 00:32:29.919 "req_id": 1 00:32:29.919 } 00:32:29.919 Got JSON-RPC error response 00:32:29.919 response: 00:32:29.919 { 00:32:29.919 "code": -5, 00:32:29.919 "message": "Input/output error" 00:32:29.919 } 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.919 request: 00:32:29.919 { 00:32:29.919 "name": "nvme0", 00:32:29.919 "trtype": "tcp", 00:32:29.919 "traddr": "10.0.0.1", 00:32:29.919 "adrfam": "ipv4", 00:32:29.919 "trsvcid": "4420", 00:32:29.919 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:29.919 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:29.919 "prchk_reftag": false, 00:32:29.919 "prchk_guard": false, 00:32:29.919 "hdgst": false, 00:32:29.919 "ddgst": false, 00:32:29.919 "dhchap_key": "key2", 00:32:29.919 "method": "bdev_nvme_attach_controller", 00:32:29.919 "req_id": 1 00:32:29.919 } 00:32:29.919 Got JSON-RPC error response 00:32:29.919 response: 00:32:29.919 { 00:32:29.919 "code": -5, 00:32:29.919 "message": "Input/output error" 00:32:29.919 } 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:29.919 07:19:59 
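
This final pass deliberately breaks the handshake three ways: attaching with no DHCHAP key at all, with the wrong host key (key2, just rejected above against a target keyed for keyid 1), and, below, with a mismatched controller key (key1 paired with ckey2). Each rejection surfaces as JSON-RPC error -5 (Input/output error, i.e. -EIO from the failed fabric connect), and the NOT wrapper converts that expected failure into a pass. A hedged reconstruction of the es bookkeeping traced at common/autotest_common.sh@648-675:

    NOT() {
        local es=0
        "$@" || es=$?                    # es=1 above: the attach failed as intended
        (( es > 128 )) && return "$es"   # signal deaths are never inverted
        (( es != 0 ))                    # equivalent to the traced (( !es == 0 ))
    }
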
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.919 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.178 request: 00:32:30.178 { 00:32:30.178 "name": "nvme0", 00:32:30.178 "trtype": "tcp", 00:32:30.178 "traddr": "10.0.0.1", 00:32:30.178 "adrfam": "ipv4", 
00:32:30.178 "trsvcid": "4420", 00:32:30.178 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:30.178 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:30.178 "prchk_reftag": false, 00:32:30.178 "prchk_guard": false, 00:32:30.178 "hdgst": false, 00:32:30.178 "ddgst": false, 00:32:30.178 "dhchap_key": "key1", 00:32:30.178 "dhchap_ctrlr_key": "ckey2", 00:32:30.178 "method": "bdev_nvme_attach_controller", 00:32:30.178 "req_id": 1 00:32:30.178 } 00:32:30.178 Got JSON-RPC error response 00:32:30.178 response: 00:32:30.178 { 00:32:30.178 "code": -5, 00:32:30.178 "message": "Input/output error" 00:32:30.178 } 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:30.178 rmmod nvme_tcp 00:32:30.178 rmmod nvme_fabrics 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1652564 ']' 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1652564 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1652564 ']' 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1652564 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1652564 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1652564' 00:32:30.178 killing process with pid 1652564 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1652564 00:32:30.178 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1652564 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.437 07:19:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:32.339 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:32.596 07:20:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:33.967 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:33.967 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:33.967 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:33.967 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:33.967 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:33.967 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:33.967 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:33.967 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:33.967 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:34.900 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:34.900 07:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NRX /tmp/spdk.key-null.1QZ /tmp/spdk.key-sha256.a4X /tmp/spdk.key-sha384.3Uf /tmp/spdk.key-sha512.XUg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:34.900 07:20:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:36.272 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:36.272 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:36.272 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:36.272 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:36.272 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:36.272 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:36.272 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:36.272 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:36.272 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:36.272 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:36.272 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:36.272 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:36.272 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:36.272 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:36.272 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:36.272 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:36.272 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:36.272 00:32:36.273 real 0m50.160s 00:32:36.273 user 0m47.658s 00:32:36.273 sys 0m5.924s 00:32:36.273 07:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:36.273 07:20:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.273 ************************************ 00:32:36.273 END TEST nvmf_auth_host 00:32:36.273 ************************************ 00:32:36.273 07:20:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:36.273 07:20:05 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:32:36.273 07:20:05 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:36.273 07:20:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:36.273 07:20:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:36.273 07:20:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:36.273 ************************************ 00:32:36.273 START TEST nvmf_digest 00:32:36.273 ************************************ 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:36.273 * Looking for test storage... 
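
The /tmp/spdk.key-* files removed at host/auth.sh@28 held secrets in the DH-HMAC-CHAP representation used throughout the auth trace: DHHC-1:<t>:<base64>:, where <t> selects the transform applied to the secret (00 for none, 01/02/03 for SHA-256/384/512) and the base64 payload carries the secret followed by a 4-byte CRC. That layout can be sanity-checked against a key from the trace above: 48 base64 characters decode to 36 bytes, a 32-byte secret plus the CRC:

    key='DHHC-1:00:NzhhN2RkMTg1Zjk3MDA3OWIwYTg3NzE5OGYxZDFkYjQV1LvR:'
    payload=${key#DHHC-1:*:}                 # strip the prefix and transform id
    payload=${payload%:}                     # strip the trailing colon
    echo -n "$payload" | base64 -d | wc -c   # prints 36
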
00:32:36.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:36.273 07:20:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:36.273 07:20:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:38.802 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:38.803 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:38.803 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:38.803 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:38.803 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:38.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:32:38.803 00:32:38.803 --- 10.0.0.2 ping statistics --- 00:32:38.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.803 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:32:38.803 00:32:38.803 --- 10.0.0.1 ping statistics --- 00:32:38.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.803 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:38.803 ************************************ 00:32:38.803 START TEST nvmf_digest_clean 00:32:38.803 ************************************ 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1662015 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1662015 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1662015 ']' 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.803 
07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:38.803 07:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:38.804 [2024-07-13 07:20:07.920962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:38.804 [2024-07-13 07:20:07.921034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.804 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.804 [2024-07-13 07:20:07.958480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:38.804 [2024-07-13 07:20:07.984689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.804 [2024-07-13 07:20:08.067825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.804 [2024-07-13 07:20:08.067908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.804 [2024-07-13 07:20:08.067932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.804 [2024-07-13 07:20:08.067943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.804 [2024-07-13 07:20:08.067953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
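The target bring-up traced above boils down to three steps: launch nvmf_tgt inside the test namespace with RPC-deferred init, wait for the RPC socket, then let the test drive configuration over it. A sketch with the harness's waitforlisten helper replaced by a plain polling loop (rpc_get_methods is used here only as a liveness probe; the loop is a stand-in, not the harness code):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # stand-in for waitforlisten: poll until the UNIX-domain RPC socket answers
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done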
00:32:38.804 [2024-07-13 07:20:08.067979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.804 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:38.804 null0 00:32:38.804 [2024-07-13 07:20:08.254583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.062 [2024-07-13 07:20:08.278763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1662034 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1662034 /var/tmp/bperf.sock 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1662034 ']' 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:39.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:39.063 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:39.063 [2024-07-13 07:20:08.326344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:39.063 [2024-07-13 07:20:08.326431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662034 ] 00:32:39.063 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.063 [2024-07-13 07:20:08.358393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:39.063 [2024-07-13 07:20:08.388293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.063 [2024-07-13 07:20:08.480853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.322 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:39.322 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:39.322 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:39.322 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:39.322 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:39.581 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:39.581 07:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:39.839 nvme0n1 00:32:39.839 07:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:39.839 07:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:39.839 Running I/O for 2 seconds... 
00:32:42.369 00:32:42.369 Latency(us) 00:32:42.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.369 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:42.369 nvme0n1 : 2.01 18726.01 73.15 0.00 0.00 6825.75 3737.98 13883.92 00:32:42.369 =================================================================================================================== 00:32:42.369 Total : 18726.01 73.15 0.00 0.00 6825.75 3737.98 13883.92 00:32:42.369 0 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:42.369 | select(.opcode=="crc32c") 00:32:42.369 | "\(.module_name) \(.executed)"' 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1662034 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1662034 ']' 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1662034 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1662034 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1662034' 00:32:42.369 killing process with pid 1662034 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1662034 00:32:42.369 Received shutdown signal, test time was about 2.000000 seconds 00:32:42.369 00:32:42.369 Latency(us) 00:32:42.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.369 =================================================================================================================== 00:32:42.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.369 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1662034 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:42.628 07:20:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1662445 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1662445 /var/tmp/bperf.sock 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1662445 ']' 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:42.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:42.628 07:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:42.628 [2024-07-13 07:20:11.874315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:42.628 [2024-07-13 07:20:11.874398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662445 ] 00:32:42.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:42.628 Zero copy mechanism will not be used. 00:32:42.628 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.628 [2024-07-13 07:20:11.905719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
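Each run_bperf pass repeats the same RPC choreography against the bdevperf socket, all of it visible verbatim in the trace: finish framework init, attach an NVMe/TCP controller with data digest enabled, then drive I/O from bdevperf.py. Condensed (socket path, address, and NQN exactly as used above; --ddgst is the data-digest switch under test):

  RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC framework_start_init
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests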
00:32:42.628 [2024-07-13 07:20:11.932210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.628 [2024-07-13 07:20:12.022039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.887 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:42.887 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:42.887 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:42.887 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:42.887 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:43.145 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:43.145 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:43.411 nvme0n1 00:32:43.411 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:43.411 07:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:43.669 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:43.669 Zero copy mechanism will not be used. 00:32:43.669 Running I/O for 2 seconds... 
00:32:45.566 00:32:45.566 Latency(us) 00:32:45.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.566 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:45.566 nvme0n1 : 2.00 3334.02 416.75 0.00 0.00 4794.70 1268.24 13398.47 00:32:45.566 =================================================================================================================== 00:32:45.566 Total : 3334.02 416.75 0.00 0.00 4794.70 1268.24 13398.47 00:32:45.566 0 00:32:45.566 07:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:45.566 07:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:45.566 07:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:45.566 07:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:45.566 | select(.opcode=="crc32c") 00:32:45.566 | "\(.module_name) \(.executed)"' 00:32:45.566 07:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1662445 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1662445 ']' 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1662445 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1662445 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1662445' 00:32:45.824 killing process with pid 1662445 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1662445 00:32:45.824 Received shutdown signal, test time was about 2.000000 seconds 00:32:45.824 00:32:45.824 Latency(us) 00:32:45.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.824 =================================================================================================================== 00:32:45.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.824 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1662445 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:46.081 07:20:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1662943 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1662943 /var/tmp/bperf.sock 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1662943 ']' 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.081 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:46.081 [2024-07-13 07:20:15.530035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:46.081 [2024-07-13 07:20:15.530121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662943 ] 00:32:46.338 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.338 [2024-07-13 07:20:15.561972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
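After each timed run the test decides pass/fail by asking the initiator's accel layer which module actually executed the crc32c work. The check is the accel_get_stats call plus the jq filter traced above; on this runner (scan_dsa=false, no DSA engaged) the expected module is software with a non-zero count:

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected output shape: software <executed-count>, with the count > 0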
00:32:46.338 [2024-07-13 07:20:15.593552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.338 [2024-07-13 07:20:15.684377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.338 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.338 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:46.338 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:46.338 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:46.338 07:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:46.903 07:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.903 07:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:47.161 nvme0n1 00:32:47.161 07:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:47.161 07:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:47.419 Running I/O for 2 seconds... 00:32:49.316 00:32:49.316 Latency(us) 00:32:49.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.316 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.316 nvme0n1 : 2.01 20383.46 79.62 0.00 0.00 6269.83 3228.25 15340.28 00:32:49.316 =================================================================================================================== 00:32:49.316 Total : 20383.46 79.62 0.00 0.00 6269.83 3228.25 15340.28 00:32:49.316 0 00:32:49.316 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:49.316 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:49.316 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:49.316 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:49.316 | select(.opcode=="crc32c") 00:32:49.316 | "\(.module_name) \(.executed)"' 00:32:49.316 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1662943 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 1662943 ']' 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1662943 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1662943 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1662943' 00:32:49.575 killing process with pid 1662943 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1662943 00:32:49.575 Received shutdown signal, test time was about 2.000000 seconds 00:32:49.575 00:32:49.575 Latency(us) 00:32:49.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.575 =================================================================================================================== 00:32:49.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:49.575 07:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1662943 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1663376 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1663376 /var/tmp/bperf.sock 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1663376 ']' 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:49.833 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.833 [2024-07-13 07:20:19.278387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:49.833 [2024-07-13 07:20:19.278471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663376 ] 00:32:49.833 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:49.833 Zero copy mechanism will not be used. 00:32:50.091 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.091 [2024-07-13 07:20:19.310603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:50.091 [2024-07-13 07:20:19.343294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.091 [2024-07-13 07:20:19.439801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.091 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:50.091 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:50.091 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:50.091 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:50.091 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:50.656 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.656 07:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.913 nvme0n1 00:32:50.913 07:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:50.913 07:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:51.171 Zero copy mechanism will not be used. 00:32:51.171 Running I/O for 2 seconds... 
00:32:53.071 00:32:53.071 Latency(us) 00:32:53.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.071 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:53.071 nvme0n1 : 2.01 3110.67 388.83 0.00 0.00 5131.51 3640.89 13010.11 00:32:53.071 =================================================================================================================== 00:32:53.071 Total : 3110.67 388.83 0.00 0.00 5131.51 3640.89 13010.11 00:32:53.071 0 00:32:53.071 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:53.071 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:53.071 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:53.071 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:53.071 | select(.opcode=="crc32c") 00:32:53.071 | "\(.module_name) \(.executed)"' 00:32:53.071 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1663376 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1663376 ']' 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1663376 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1663376 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1663376' 00:32:53.329 killing process with pid 1663376 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1663376 00:32:53.329 Received shutdown signal, test time was about 2.000000 seconds 00:32:53.329 00:32:53.329 Latency(us) 00:32:53.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.329 =================================================================================================================== 00:32:53.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.329 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1663376 00:32:53.596 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1662015 00:32:53.596 07:20:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1662015 ']' 00:32:53.596 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1662015 00:32:53.596 07:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:53.596 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:53.596 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1662015 00:32:53.596 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:53.596 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:53.596 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1662015' 00:32:53.596 killing process with pid 1662015 00:32:53.596 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1662015 00:32:53.596 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1662015 00:32:53.868 00:32:53.868 real 0m15.402s 00:32:53.868 user 0m30.790s 00:32:53.868 sys 0m3.962s 00:32:53.868 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:53.868 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.868 ************************************ 00:32:53.868 END TEST nvmf_digest_clean 00:32:53.868 ************************************ 00:32:53.868 07:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:53.868 07:20:23 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:53.868 07:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:53.868 07:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:53.868 07:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:54.125 ************************************ 00:32:54.125 START TEST nvmf_digest_error 00:32:54.125 ************************************ 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1663811 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1663811 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1663811 ']' 00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1663811
00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1663811 ']'
00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:54.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:54.125 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:54.125 [2024-07-13 07:20:23.383444] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:54.125 [2024-07-13 07:20:23.383520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:54.125 EAL: No free 2048 kB hugepages reported on node 1
00:32:54.125 [2024-07-13 07:20:23.421115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:54.125 [2024-07-13 07:20:23.454244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:54.125 [2024-07-13 07:20:23.552365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:54.125 [2024-07-13 07:20:23.552425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:54.125 [2024-07-13 07:20:23.552441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:54.125 [2024-07-13 07:20:23.552455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:54.125 [2024-07-13 07:20:23.552468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
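The "EAL: No free 2048 kB hugepages reported on node 1" line is informational in this run: the app goes on to start its reactor below, so hugepages were available where needed. When an SPDK app does fail at this stage, the usual first checks are the hugepage reservation and, for the tracing hint printed above, the trace tool itself (a generic sketch, not part of this test; setup.sh is relative to the SPDK checkout):

    grep -i hugepages /proc/meminfo     # HugePages_Total / HugePages_Free per page size
    sudo scripts/setup.sh               # reserve hugepages and rebind NVMe devices
    spdk_trace -s nvmf -i 0             # snapshot the tracepoints enabled by -e 0xFFFF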
00:32:54.125 [2024-07-13 07:20:23.552499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:54.383 [2024-07-13 07:20:23.621085] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:54.383 null0
00:32:54.383 [2024-07-13 07:20:23.742295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:54.383 [2024-07-13 07:20:23.766487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
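Because the target is still paused by --wait-for-rpc at this point, crc32c can be routed to the error-injection accel module before initialization completes; common_target_config then issues a batched rpc_cmd whose results are visible above: a null0 bdev and an NVMe/TCP listener on 10.0.0.2:4420. The batch itself is not echoed in the trace, so the following is a plausible reconstruction rather than the literal script (subsystem NQN and address taken from the log; the null bdev size is illustrative):

    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc accel_assign_opc -o crc32c -m error        # route crc32c to the 'error' module
    rpc framework_start_init                       # resume the --wait-for-rpc startup
    rpc bdev_null_create null0 100 4096            # namespace backing (size assumed)
    rpc nvmf_create_transport -t tcp
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420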
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1663951
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1663951 /var/tmp/bperf.sock
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1663951 ']'
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:54.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:54.383 07:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:54.383 [2024-07-13 07:20:23.815472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:54.383 [2024-07-13 07:20:23.815546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663951 ]
00:32:54.639 EAL: No free 2048 kB hugepages reported on node 1
00:32:54.639 [2024-07-13 07:20:23.846699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:54.639 [2024-07-13 07:20:23.879278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:54.639 [2024-07-13 07:20:23.976774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:54.639 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:54.639 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:54.639 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:54.639 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:55.227 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:55.227 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:55.227 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:55.227 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:55.227 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:55.227 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:55.493 nvme0n1
00:32:55.493 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:55.493 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:55.493 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:55.493 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
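The sequence above is what provokes the failures that follow: bdev_nvme_set_options enables per-command NVMe error accounting with unlimited retries, the controller is attached with data digest enabled (--ddgst), and accel_error_inject_error arms the target's error module to corrupt every 256th crc32c operation, so the digests the target computes are periodically wrong and the initiator in bdevperf rejects the corresponding reads. Replayed as standalone commands in the order logged (reading rpc_cmd as the target's default /var/tmp/spdk.sock and bperf_rpc as the bdevperf socket, which is an inference from the helper names):

    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # target: clear any stale rule
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target: corrupt every 256th crc32c
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the rule armed, roughly one read completion in 256 fails its digest check, which is exactly the pattern of data digest errors and TRANSIENT TRANSPORT ERROR completions in the trace below.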
00:32:55.493 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:55.493 07:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:55.752 Running I/O for 2 seconds...
00:32:55.752 [2024-07-13 07:20:24.986041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0)
00:32:55.752 [2024-07-13 07:20:24.986090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.752 [2024-07-13 07:20:24.986123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:55.752 [2024-07-13 07:20:25.001718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0)
00:32:55.752 [2024-07-13 07:20:25.001764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.752 [2024-07-13 07:20:25.001796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:55.752 [2024-07-13 07:20:25.019372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0)
00:32:55.752 [2024-07-13 07:20:25.019419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.752 [2024-07-13 07:20:25.019451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:55.752 [2024-07-13 07:20:25.032005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0)
00:32:55.752 [2024-07-13 07:20:25.032035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.752 [2024-07-13 07:20:25.032067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:55.752 [2024-07-13 07:20:25.048449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0)
00:32:55.752 [2024-07-13 07:20:25.048495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.752 [2024-07-13 07:20:25.048526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:55.752 [2024-07-13 07:20:25.061467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0)
00:32:55.752 [2024-07-13 07:20:25.061504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:55.752 [2024-07-13 07:20:25.061523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:55.752 [2024-07-13 07:20:25.078706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0)
00:32:55.752 [2024-07-13
07:20:25.078743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.078763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.095075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.095115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.095144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.107472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.107509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.107528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.124632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.124668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.124687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.139238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.139291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.139323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.151783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.151819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.151838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.166140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.166170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.166201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.183316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.183360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.183384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.752 [2024-07-13 07:20:25.195311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:55.752 [2024-07-13 07:20:25.195347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.752 [2024-07-13 07:20:25.195366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.212123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.212164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.212207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.225135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.225181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.225201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.241515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.241560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.241591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.253481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.253524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.253547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.269305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.269334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.284251] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.284296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.284327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.296895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.296952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.296969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.010 [2024-07-13 07:20:25.309777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.010 [2024-07-13 07:20:25.309813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.010 [2024-07-13 07:20:25.309832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.323423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.323461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.323480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.337469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.337515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.337542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.351521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.351565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.351596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.365016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.365056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.365084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:56.011 [2024-07-13 07:20:25.379136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.379169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.379218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.391352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.391389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.391408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.406327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.406362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.406382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.422561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.422605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.422637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.436200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.436236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.436255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.011 [2024-07-13 07:20:25.453609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.011 [2024-07-13 07:20:25.453653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-07-13 07:20:25.453684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.466232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.466262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.466294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.481377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.481413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.481432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.498021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.498062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.498083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.510030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.510072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.510092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.526768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.526804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.526824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.541624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.541669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.541701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.555914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.555950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.555969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.571941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.571977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.571996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.586717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.586761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.269 [2024-07-13 07:20:25.586793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.269 [2024-07-13 07:20:25.599686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.269 [2024-07-13 07:20:25.599722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.599743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.614039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.614076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.614095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.628726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.628762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.628782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.640872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.640921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.640941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.656418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.656456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.656475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.670225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.670269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.670300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.685452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.685498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.685531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.698703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.698740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.698759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.270 [2024-07-13 07:20:25.714569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.270 [2024-07-13 07:20:25.714606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.270 [2024-07-13 07:20:25.714627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.528 [2024-07-13 07:20:25.726798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.528 [2024-07-13 07:20:25.726835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.528 [2024-07-13 07:20:25.726855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.528 [2024-07-13 07:20:25.744577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.528 [2024-07-13 07:20:25.744613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.528 [2024-07-13 07:20:25.744634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.528 [2024-07-13 07:20:25.756932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.528 [2024-07-13 07:20:25.756969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.528 [2024-07-13 07:20:25.757007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.528 [2024-07-13 07:20:25.772861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.528 [2024-07-13 07:20:25.772907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:56.528 [2024-07-13 07:20:25.772927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.528 [2024-07-13 07:20:25.788796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.528 [2024-07-13 07:20:25.788842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.528 [2024-07-13 07:20:25.788887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.528 [2024-07-13 07:20:25.802316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.802353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.802373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.818205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.818251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.818284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.832002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.832039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.832059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.848111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.848158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.848179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.860610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.860647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.860668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.875952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.876000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7714 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.876035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.890290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.890336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.890366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.901706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.901743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.901762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.918946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.918983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.919003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.934159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.934197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.934217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.947444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.947491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.947524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.964537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.964574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.964594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.529 [2024-07-13 07:20:25.977188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.529 [2024-07-13 07:20:25.977225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.529 [2024-07-13 07:20:25.977245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.787 [2024-07-13 07:20:25.995143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.787 [2024-07-13 07:20:25.995187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.787 [2024-07-13 07:20:25.995219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.787 [2024-07-13 07:20:26.011451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.787 [2024-07-13 07:20:26.011499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.787 [2024-07-13 07:20:26.011539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.787 [2024-07-13 07:20:26.024908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.787 [2024-07-13 07:20:26.024946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.787 [2024-07-13 07:20:26.024966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.787 [2024-07-13 07:20:26.040131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.787 [2024-07-13 07:20:26.040176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.787 [2024-07-13 07:20:26.040199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.787 [2024-07-13 07:20:26.051768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.787 [2024-07-13 07:20:26.051807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.787 [2024-07-13 07:20:26.051827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.787 [2024-07-13 07:20:26.067741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.787 [2024-07-13 07:20:26.067778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.787 [2024-07-13 07:20:26.067798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.787 [2024-07-13 07:20:26.084762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.787 [2024-07-13 07:20:26.084809] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.787 [2024-07-13 07:20:26.084841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.788 [2024-07-13 07:20:26.097145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.788 [2024-07-13 07:20:26.097182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.788 [2024-07-13 07:20:26.097203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.788 [2024-07-13 07:20:26.113801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.788 [2024-07-13 07:20:26.113838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.788 [2024-07-13 07:20:26.113858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.788 [2024-07-13 07:20:26.125745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.788 [2024-07-13 07:20:26.125782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.788 [2024-07-13 07:20:26.125803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.788 [2024-07-13 07:20:26.139960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.788 [2024-07-13 07:20:26.140009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.788 [2024-07-13 07:20:26.140029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.788 [2024-07-13 07:20:26.156746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.788 [2024-07-13 07:20:26.156794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.788 [2024-07-13 07:20:26.156828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.788 [2024-07-13 07:20:26.168918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa170d0) 00:32:56.788 [2024-07-13 07:20:26.168955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.788 [2024-07-13 07:20:26.168974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.788 [2024-07-13 07:20:26.185548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa170d0)
00:32:56.788 [2024-07-13 07:20:26.185586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.788 [2024-07-13 07:20:26.185607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats for roughly fifty more single-block (len:1) READs between 07:20:26.200 and 07:20:26.961: nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0xa170d0), and each command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1; only the cid and lba fields vary ...]
00:32:57.565
00:32:57.565                                                 Latency(us)
00:32:57.565 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:57.565 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:57.565 nvme0n1            :       2.01   17507.37      68.39       0.00     0.00    7302.26    3980.71   21165.70
00:32:57.565 ===================================================================================================================
00:32:57.565 Total              :               17507.37      68.39       0.00     0.00    7302.26    3980.71   21165.70
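
These transient transport errors are the expected outcome of the test: the accel layer has been told to corrupt CRC32C results, so every NVMe/TCP read fails data digest verification and, because the bdev retry count is unlimited, is retried instead of failed. That is why Fail/s stays at 0.00 while roughly 17.5K IOPS still complete. The trace that follows reads the accumulated error count back out of bdevperf; a minimal standalone sketch of that readout, assuming an SPDK checkout at $SPDK_DIR and the same RPC socket used above:

    #!/usr/bin/env bash
    # Hypothetical standalone version of get_transient_errcount from host/digest.sh.
    # Assumes: $SPDK_DIR points at an SPDK checkout, bdevperf is listening on $SOCK.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    SOCK=/var/tmp/bperf.sock

    # bdev_get_iostat carries per-controller NVMe error counters because the
    # controller was created with --nvme-error-stat; jq digs out the transient count.
    errs=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The test only asserts that at least one digest error was counted (here: 137).
    (( errs > 0 )) && echo "OK: $errs transient transport errors" || exit 1
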
00:32:57.565 0
00:32:57.565 07:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
07:20:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:57.565 | .driver_specific
00:32:57.565 | .nvme_error
00:32:57.565 | .status_code
00:32:57.565 | .command_transient_transport_error'
00:32:57.823 07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 ))
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1663951
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1663951 ']'
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1663951
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1663951
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1663951'
killing process with pid 1663951
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1663951
Received shutdown signal, test time was about 2.000000 seconds
00:32:57.823
00:32:57.823                                                 Latency(us)
00:32:57.823 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:57.823 ===================================================================================================================
00:32:57.823 Total              :       0.00       0.00       0.00       0.00     0.00       0.00       0.00       0.00
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1663951
00:32:58.082 07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1664358
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
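
The next pass re-launches bdevperf for a large-I/O variant of the same test: -m 2 is the core mask (core 1), -r sets the RPC listen socket, -w randread selects the workload, -o 131072 and -q 16 ask for 128 KiB random reads at queue depth 16, -t 2 runs for two seconds, and -z starts the app idle until perform_tests arrives over RPC. waitforlisten then blocks until the socket answers. A condensed sketch of that launch-and-wait pattern (the polling loop approximates autotest_common.sh's waitforlisten rather than copying it):

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    SOCK=/var/tmp/bperf.sock

    # -z keeps bdevperf idle until it is driven over RPC.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll the RPC socket until the app answers (roughly what waitforlisten does).
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
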
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1664358 /var/tmp/bperf.sock
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1664358 ']'
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:58.340 [2024-07-13 07:20:27.562130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:32:58.340 [2024-07-13 07:20:27.562222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664358 ]
00:32:58.340 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:58.340 Zero copy mechanism will not be used.
00:32:58.340 EAL: No free 2048 kB hugepages reported on node 1
00:32:58.340 [2024-07-13 07:20:27.594285] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
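
Two of these startup notices are easy to misread. The zero-copy message only says that the 128 KiB I/O size is above the tool's 65536-byte zero-copy threshold, so those buffers will be copied; it is informational, not an error. The EAL line means NUMA node 1 simply has no hugepages reserved; the CI host provisions hugepages before the run, typically with SPDK's setup.sh. A hedged sketch of that provisioning step (the 2048 MiB figure is illustrative, not taken from this log):

    # Reserve hugepages for SPDK/DPDK before launching apps (run once, as root).
    # HUGEMEM is in MiB; 2048 is an assumed example value, not from this log.
    sudo HUGEMEM=2048 "$SPDK_DIR/scripts/setup.sh"

    # Verify what the kernel actually reserved per NUMA node:
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
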
00:32:58.340 [2024-07-13 07:20:27.626346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:58.340 [2024-07-13 07:20:27.722231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:58.598 07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
07:20:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:58.856 07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:58.856 07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:59.424 nvme0n1
00:32:59.424 07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:59.424 07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
07:20:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:59.424 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:59.424 Zero copy mechanism will not be used.
00:32:59.424 Running I/O for 2 seconds...
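
The four RPCs traced above are the whole error-injection setup: bdev_nvme_set_options turns on per-controller NVMe error counters (--nvme-error-stat) and unlimited retries (--bdev-retry-count -1), bdev_nvme_attach_controller --ddgst negotiates the NVMe/TCP data digest (CRC32C) for the new nvme0 controller and creates bdev nvme0n1, accel_error_inject_error switches crc32c corruption back on (the -i 32 argument is reproduced verbatim from the trace; the log itself does not spell out its exact semantics), and bdevperf.py perform_tests starts the run. Stripped of the harness wrappers, the sequence is just:

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # creates bdev nvme0n1
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # flags verbatim from the trace
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Every read that follows therefore fails digest verification, is logged below, is retried, and bumps command_transient_transport_error, which a later get_transient_errcount assertion checks.
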
00:32:59.424 [2024-07-13 07:20:28.756059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00)
00:32:59.424 [2024-07-13 07:20:28.756132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.424 [2024-07-13 07:20:28.756154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats for some sixty-five more 32-block (128 KiB) READs between 07:20:28.765 and 07:20:29.381, roughly one data digest error on tqpair=(0x22f3f00) every 9-10 ms, each completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1 cid:15; only the lba and sqhd fields vary, and the run continues in the same pattern below ...]
00:32:59.944 [2024-07-13 07:20:29.390481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00)
00:32:59.944 [2024-07-13 07:20:29.390513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.944 [2024-07-13 07:20:29.390532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.399732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.399765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.399783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.409028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.409061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.409080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.418139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.418173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.418193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.427363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.427397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.427415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.436745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.436778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.436797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.446006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.446039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.446058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.455228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.455260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:00.203 [2024-07-13 07:20:29.455279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.464427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.464460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.464480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.473638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.473671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.473690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.482832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.482875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.482896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.492012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.492045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.492063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.501256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.501290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.501315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.510475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.510508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.510527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.519673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.519706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.519725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.528913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.528945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.528965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.538135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.538169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.538188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.547311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.547344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.547363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.556484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.556517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.556536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.565966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.565999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.566018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.203 [2024-07-13 07:20:29.575210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.203 [2024-07-13 07:20:29.575242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.203 [2024-07-13 07:20:29.575262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.584506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.204 [2024-07-13 07:20:29.584544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.584563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.593786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.204 [2024-07-13 07:20:29.593819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.593838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.603024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.204 [2024-07-13 07:20:29.603058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.603077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.612178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.204 [2024-07-13 07:20:29.612211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.612230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.621325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.204 [2024-07-13 07:20:29.621358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.621377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.630625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.204 [2024-07-13 07:20:29.630657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.630677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.639837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.204 [2024-07-13 07:20:29.639878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.639899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.204 [2024-07-13 07:20:29.649063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 
00:33:00.204 [2024-07-13 07:20:29.649096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.204 [2024-07-13 07:20:29.649115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.658205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.658239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.658263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.667522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.667556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.667575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.676825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.676857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.676884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.686446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.686480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.686499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.696260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.696294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.696314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.706033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.706068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.706087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.715732] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.715767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.715786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.724968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.725002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.725021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.734171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.734205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.734224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.743410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.743449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.743469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.752607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.752640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.752659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.761786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.761819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.761838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.770999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.771033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.771052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.780276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.780309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.780328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.789687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.789720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.789738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.798997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.799030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.799049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.808152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.808185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.808203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.817432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.817465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.817483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.826613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.826646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.826665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.835791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.835825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.835843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.845047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.463 [2024-07-13 07:20:29.845080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.463 [2024-07-13 07:20:29.845099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.463 [2024-07-13 07:20:29.854317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.464 [2024-07-13 07:20:29.854350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.464 [2024-07-13 07:20:29.854368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.464 [2024-07-13 07:20:29.863515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.464 [2024-07-13 07:20:29.863547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.464 [2024-07-13 07:20:29.863566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.464 [2024-07-13 07:20:29.872735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.464 [2024-07-13 07:20:29.872768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.464 [2024-07-13 07:20:29.872786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.464 [2024-07-13 07:20:29.881947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.464 [2024-07-13 07:20:29.881980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.464 [2024-07-13 07:20:29.881998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.464 [2024-07-13 07:20:29.891209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.464 [2024-07-13 07:20:29.891242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.464 [2024-07-13 07:20:29.891261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.464 [2024-07-13 07:20:29.901792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.464 [2024-07-13 07:20:29.901828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.464 [2024-07-13 07:20:29.901857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.464 [2024-07-13 07:20:29.913478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.464 [2024-07-13 07:20:29.913514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.464 [2024-07-13 07:20:29.913534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:29.924139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:29.924175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:29.924196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:29.935308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:29.935345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:29.935365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:29.947288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:29.947328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:29.947348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:29.958154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:29.958188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:29.958208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:29.969588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:29.969624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:29.969644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:29.981845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:29.981890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:00.723 [2024-07-13 07:20:29.981911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:29.992582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:29.992617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:29.992637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.002174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.002235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.002268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.012023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.012062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.012082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.021840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.021890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.021920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.031791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.031834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.031855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.041143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.041179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.041198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.050473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.050506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.050525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.060346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.060401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.060423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.069852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.069901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.069921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.079454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.079490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.079520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.088886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.088920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.088938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.098211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.098244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.098263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.107505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.107538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.107557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.116759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.116792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.116811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.126124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.126157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.126176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.135420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.135452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.723 [2024-07-13 07:20:30.135470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.723 [2024-07-13 07:20:30.144706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.723 [2024-07-13 07:20:30.144738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.724 [2024-07-13 07:20:30.144756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.724 [2024-07-13 07:20:30.154001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.724 [2024-07-13 07:20:30.154033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.724 [2024-07-13 07:20:30.154051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.724 [2024-07-13 07:20:30.163444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.724 [2024-07-13 07:20:30.163484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.724 [2024-07-13 07:20:30.163503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.724 [2024-07-13 07:20:30.172900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.724 [2024-07-13 07:20:30.172933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.724 [2024-07-13 07:20:30.172951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.182266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.983 
[2024-07-13 07:20:30.182301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.182319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.191697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.983 [2024-07-13 07:20:30.191731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.191750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.201078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.983 [2024-07-13 07:20:30.201111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.201130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.210446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.983 [2024-07-13 07:20:30.210480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.210499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.219815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.983 [2024-07-13 07:20:30.219849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.219876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.229207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.983 [2024-07-13 07:20:30.229240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.229258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.238524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.983 [2024-07-13 07:20:30.238558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.238576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.983 [2024-07-13 07:20:30.248079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22f3f00) 00:33:00.983 [2024-07-13 07:20:30.248113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.983 [2024-07-13 07:20:30.248132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.984 [2024-07-13 07:20:30.257332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.984 [2024-07-13 07:20:30.257365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.984 [2024-07-13 07:20:30.257383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.984 [2024-07-13 07:20:30.266668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.984 [2024-07-13 07:20:30.266701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.984 [2024-07-13 07:20:30.266720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.984 [2024-07-13 07:20:30.276052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.984 [2024-07-13 07:20:30.276086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.984 [2024-07-13 07:20:30.276104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.984 [2024-07-13 07:20:30.285363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.984 [2024-07-13 07:20:30.285396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.984 [2024-07-13 07:20:30.285415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.984 [2024-07-13 07:20:30.294659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.984 [2024-07-13 07:20:30.294691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.984 [2024-07-13 07:20:30.294709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.984 [2024-07-13 07:20:30.303922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00) 00:33:00.984 [2024-07-13 07:20:30.303955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.984 [2024-07-13 07:20:30.303974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.984 [2024-07-13 07:20:30.313228] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00)
00:33:00.984 [2024-07-13 07:20:30.313260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:00.984 [2024-07-13 07:20:30.313278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error, READ command print, COMMAND TRANSIENT TRANSPORT ERROR completion) repeats about every 9 ms with varying lba through 07:20:30.734 ...]
00:33:01.504 [2024-07-13 07:20:30.743473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22f3f00)
00:33:01.504 [2024-07-13 07:20:30.743504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:01.504 [2024-07-13 07:20:30.743523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:01.504
00:33:01.504 Latency(us)
00:33:01.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:01.504 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:01.504 nvme0n1 : 2.00 3284.04 410.50 0.00 0.00 4866.47 1535.24 14272.28
00:33:01.504 ===================================================================================================================
00:33:01.504 Total : 3284.04 410.50 0.00 0.00 4866.47 1535.24 14272.28
00:33:01.504 0
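The check that follows is the payoff of the run above: digest.sh counts how many completions came back as COMMAND TRANSIENT TRANSPORT ERROR and requires the count to be positive. A sketch of its get_transient_errcount helper, reconstructed from the digest.sh@27/@28/@71 xtrace below rather than from the source file (bperf_rpc is the suite's rpc.py wrapper for /var/tmp/bperf.sock, and the counter is only populated because bdevperf was started with bdev_nvme_set_options --nvme-error-stat):

get_transient_errcount() {
	# Pull the per-bdev NVMe error statistics out of bdevperf's iostat and
	# select the transient-transport-error counter.
	bperf_rpc bdev_get_iostat -b "$1" \
		| jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}

# digest.sh@71 asserts the count is positive; with this run's 212 errors
# that evaluates to the (( 212 > 0 )) visible below.
(($(get_transient_errcount nvme0n1) > 0))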
00:33:01.504 07:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:01.504 07:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:01.504 07:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:01.504 | .driver_specific
00:33:01.504 | .nvme_error
00:33:01.504 | .status_code
00:33:01.504 | .command_transient_transport_error'
00:33:01.504 07:20:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 ))
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1664358
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1664358 ']'
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1664358
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1664358
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1664358'
killing process with pid 1664358
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1664358
Received shutdown signal, test time was about 2.000000 seconds
00:33:01.763
00:33:01.763 Latency(us)
00:33:01.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:01.763 ===================================================================================================================
00:33:01.763 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:01.763 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1664358
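With the randread pass finished and its bdevperf instance killed, the same helper is invoked again as run_bperf_err randwrite 4096 128. Its overall shape, pieced together from the digest.sh line numbers traced in this log (a sketch, not the verbatim file; $rootdir for the spdk checkout and the backgrounding of bdevperf are assumptions):

run_bperf_err() {
	local rw bs qd                                            # digest.sh@54
	rw=$1 bs=$2 qd=$3                                         # digest.sh@56
	# Start bdevperf idle (-z: wait for RPCs) on a private socket.
	"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
		-w "$rw" -o "$bs" -t 2 -q "$qd" -z &              # digest.sh@57
	bperfpid=$!                                               # digest.sh@58
	waitforlisten "$bperfpid" /var/tmp/bperf.sock             # digest.sh@60
	# Keep NVMe error counters and retry failed I/O indefinitely.
	bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
	rpc_cmd accel_error_inject_error -o crc32c -t disable     # digest.sh@63
	# Attach the target with the NVMe/TCP data digest (DDGST) enabled.
	bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
		-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
	# Arm the accel error module to corrupt crc32c results (-i 256 as traced).
	rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
	bperf_py perform_tests                                    # digest.sh@69
	(($(get_transient_errcount nvme0n1) > 0))                 # digest.sh@71
	killprocess "$bperfpid"                                   # digest.sh@73
}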
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1664771
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1664771 /var/tmp/bperf.sock
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1664771 ']'
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:02.022 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:02.022 [2024-07-13 07:20:31.323312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:33:02.022 [2024-07-13 07:20:31.323406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664771 ]
00:33:02.022 EAL: No free 2048 kB hugepages reported on node 1
00:33:02.022 [2024-07-13 07:20:31.356088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:02.022 [2024-07-13 07:20:31.388762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:02.280 [2024-07-13 07:20:31.484920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:02.280 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:02.280 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:02.280 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:02.280 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:02.540 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:02.540 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:02.540 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:02.540 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:02.540 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:02.540 07:20:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:02.799 nvme0n1
00:33:02.799 07:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:02.799 07:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:02.799 07:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:02.799 07:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
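Every failure in the run below is logged as a three-line round trip: the digest check fails on the corrupted CRC32C (data_crc32_calc_done), the offending WRITE is printed, and its completion carries COMMAND TRANSIENT TRANSPORT ERROR, where (00/22) is status code type 0x0 / status code 0x22. If this console output were saved to a file, the errors could also be tallied offline; bperf.log is a hypothetical capture, and the count should line up with what get_transient_errcount reports over RPC:

# Count corrupted-digest completions in a captured log (bperf.log is assumed).
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log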
00:33:02.799 07:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:02.799 07:20:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:03.057 Running I/O for 2 seconds...
00:33:03.057 [2024-07-13 07:20:32.345584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ee5c8
00:33:03.057 [2024-07-13 07:20:32.346642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:03.057 [2024-07-13 07:20:32.346684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
[... the same three-line sequence (Data digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR completion) repeats with varying pdu and lba through 07:20:33.282 ...]
00:33:04.089 [2024-07-13 07:20:33.294823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e27f0
00:33:04.089 [2024-07-13 07:20:33.295951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9601 len:1 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:33:04.089 [2024-07-13 07:20:33.295983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:04.089 [2024-07-13 07:20:33.309247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e4de8 00:33:04.089 [2024-07-13 07:20:33.311425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.089 [2024-07-13 07:20:33.311461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.318239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f6890 00:33:04.090 [2024-07-13 07:20:33.319214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.319244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.330181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ec840 00:33:04.090 [2024-07-13 07:20:33.331138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.331168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.344236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fdeb0 00:33:04.090 [2024-07-13 07:20:33.345389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.345420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.356806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e1710 00:33:04.090 [2024-07-13 07:20:33.357967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.357997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.370964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fbcf0 00:33:04.090 [2024-07-13 07:20:33.372791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.372821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.382781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190eb760 00:33:04.090 [2024-07-13 07:20:33.384083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.384115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.395544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f2d80 00:33:04.090 [2024-07-13 07:20:33.396674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.396705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.407448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fef90 00:33:04.090 [2024-07-13 07:20:33.409370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.409401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.419263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f7970 00:33:04.090 [2024-07-13 07:20:33.420231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.420262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.432280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e3060 00:33:04.090 [2024-07-13 07:20:33.433412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.433443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.445078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e1f80 00:33:04.090 [2024-07-13 07:20:33.446215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.446246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.459230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f9b30 00:33:04.090 [2024-07-13 07:20:33.461038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.461069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.471018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e8d30 00:33:04.090 [2024-07-13 07:20:33.472294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:19952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.472326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.482520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e73e0 00:33:04.090 [2024-07-13 07:20:33.483789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.483820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.496591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f4f40 00:33:04.090 [2024-07-13 07:20:33.498059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.498091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.509646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e8d30 00:33:04.090 [2024-07-13 07:20:33.511285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.511316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.521603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f35f0 00:33:04.090 [2024-07-13 07:20:33.523225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.523255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:04.090 [2024-07-13 07:20:33.533385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e84c0 00:33:04.090 [2024-07-13 07:20:33.534488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.090 [2024-07-13 07:20:33.534520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.546316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e27f0 00:33:04.348 [2024-07-13 07:20:33.547282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.348 [2024-07-13 07:20:33.547313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.560795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e5a90 00:33:04.348 [2024-07-13 07:20:33.562768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.348 [2024-07-13 07:20:33.562799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.572593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f3e60 00:33:04.348 [2024-07-13 07:20:33.574041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.348 [2024-07-13 07:20:33.574073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.584078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f92c0 00:33:04.348 [2024-07-13 07:20:33.585989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.348 [2024-07-13 07:20:33.586020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.595764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ec408 00:33:04.348 [2024-07-13 07:20:33.596712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.348 [2024-07-13 07:20:33.596743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.608793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e3060 00:33:04.348 [2024-07-13 07:20:33.609909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.348 [2024-07-13 07:20:33.609940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.620770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f0350 00:33:04.348 [2024-07-13 07:20:33.621877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.348 [2024-07-13 07:20:33.621907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:04.348 [2024-07-13 07:20:33.634831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e7c50 00:33:04.349 [2024-07-13 07:20:33.636129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.636166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.647856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190df550 00:33:04.349 [2024-07-13 
07:20:33.649326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.649358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.660659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e99d8 00:33:04.349 [2024-07-13 07:20:33.662132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.662163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.673264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fd208 00:33:04.349 [2024-07-13 07:20:33.674722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.674753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.686269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190df118 00:33:04.349 [2024-07-13 07:20:33.687913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.687955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.696699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e01f8 00:33:04.349 [2024-07-13 07:20:33.697630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.697663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.709687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f31b8 00:33:04.349 [2024-07-13 07:20:33.710762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.710794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.722943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e88f8 00:33:04.349 [2024-07-13 07:20:33.724234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.724266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.734891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e0a68 
00:33:04.349 [2024-07-13 07:20:33.736168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.736200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.748958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f35f0 00:33:04.349 [2024-07-13 07:20:33.750410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.750441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.761993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190edd58 00:33:04.349 [2024-07-13 07:20:33.763608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.763639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.772724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ddc00 00:33:04.349 [2024-07-13 07:20:33.773473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.773505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.787154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ebb98 00:33:04.349 [2024-07-13 07:20:33.788936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.788967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:04.349 [2024-07-13 07:20:33.798983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e99d8 00:33:04.349 [2024-07-13 07:20:33.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.349 [2024-07-13 07:20:33.800290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.810729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ebb98 00:33:04.608 [2024-07-13 07:20:33.811992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.812024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.823936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with 
pdu=0x2000190eb760 00:33:04.608 [2024-07-13 07:20:33.825358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.825390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.835742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f7da8 00:33:04.608 [2024-07-13 07:20:33.836650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.836681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.848529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fef90 00:33:04.608 [2024-07-13 07:20:33.849269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.849300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.862973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e1b48 00:33:04.608 [2024-07-13 07:20:33.864742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.864773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.876184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ed4e8 00:33:04.608 [2024-07-13 07:20:33.878136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.878167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.887982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fe720 00:33:04.608 [2024-07-13 07:20:33.889399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.889430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.899453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e0630 00:33:04.608 [2024-07-13 07:20:33.901332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.901364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.911120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10629f0) with pdu=0x2000190ecc78 00:33:04.608 [2024-07-13 07:20:33.912040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.912072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.924127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e5a90 00:33:04.608 [2024-07-13 07:20:33.925216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.925247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.936069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f4f40 00:33:04.608 [2024-07-13 07:20:33.937145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.937176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.950147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fd640 00:33:04.608 [2024-07-13 07:20:33.951417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.951448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.962726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190de038 00:33:04.608 [2024-07-13 07:20:33.964011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.964042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.975852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190eee38 00:33:04.608 [2024-07-13 07:20:33.977298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.977330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.989102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ee5c8 00:33:04.608 [2024-07-13 07:20:33.990711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:33.990742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:33.999849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x10629f0) with pdu=0x2000190f4b08 00:33:04.608 [2024-07-13 07:20:34.000589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.608 [2024-07-13 07:20:34.000620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:04.608 [2024-07-13 07:20:34.014280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190eaef0 00:33:04.609 [2024-07-13 07:20:34.016060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.609 [2024-07-13 07:20:34.016091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:04.609 [2024-07-13 07:20:34.026090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f20d8 00:33:04.609 [2024-07-13 07:20:34.027334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.609 [2024-07-13 07:20:34.027365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:04.609 [2024-07-13 07:20:34.038881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e8d30 00:33:04.609 [2024-07-13 07:20:34.039955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.609 [2024-07-13 07:20:34.039987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:04.609 [2024-07-13 07:20:34.051715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f9b30 00:33:04.609 [2024-07-13 07:20:34.053142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.609 [2024-07-13 07:20:34.053174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:04.866 [2024-07-13 07:20:34.064901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e4578 00:33:04.866 [2024-07-13 07:20:34.066557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.866 [2024-07-13 07:20:34.066589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:04.866 [2024-07-13 07:20:34.075351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f1430 00:33:04.866 [2024-07-13 07:20:34.076246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.866 [2024-07-13 07:20:34.076283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:04.866 [2024-07-13 07:20:34.088330] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f3e60 00:33:04.867 [2024-07-13 07:20:34.089396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.089427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.102699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190df550 00:33:04.867 [2024-07-13 07:20:34.104445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.104476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.114513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f8e88 00:33:04.867 [2024-07-13 07:20:34.115733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.115765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.126042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e5ec8 00:33:04.867 [2024-07-13 07:20:34.127248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.127279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.139252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190df988 00:33:04.867 [2024-07-13 07:20:34.140640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.140671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.151069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ecc78 00:33:04.867 [2024-07-13 07:20:34.151920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.151952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.163838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e9e10 00:33:04.867 [2024-07-13 07:20:34.164530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.164561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:04.867 
[2024-07-13 07:20:34.178254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f6890 00:33:04.867 [2024-07-13 07:20:34.179987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.180018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.191473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e49b0 00:33:04.867 [2024-07-13 07:20:34.193388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.193420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.204703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e5a90 00:33:04.867 [2024-07-13 07:20:34.206776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.206808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.213677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ee190 00:33:04.867 [2024-07-13 07:20:34.214548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.214579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.225609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f57b0 00:33:04.867 [2024-07-13 07:20:34.226468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.226498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.239655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190e49b0 00:33:04.867 [2024-07-13 07:20:34.240706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.240738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.252678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f8a50 00:33:04.867 [2024-07-13 07:20:34.253875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.253906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 
m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.264627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f6458 00:33:04.867 [2024-07-13 07:20:34.265833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.265870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.278675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190f20d8 00:33:04.867 [2024-07-13 07:20:34.280078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.280110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.291689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190ec408 00:33:04.867 [2024-07-13 07:20:34.293263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.293295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.303623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fb048 00:33:04.867 [2024-07-13 07:20:34.305190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.305222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:04.867 [2024-07-13 07:20:34.316824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190fdeb0 00:33:04.867 [2024-07-13 07:20:34.318615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.867 [2024-07-13 07:20:34.318646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.125 [2024-07-13 07:20:34.328892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10629f0) with pdu=0x2000190de038 00:33:05.126 [2024-07-13 07:20:34.330099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.126 [2024-07-13 07:20:34.330131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.126 00:33:05.126 Latency(us) 00:33:05.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.126 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:05.126 nvme0n1 : 2.00 20220.62 78.99 0.00 0.00 6323.26 2669.99 15631.55 00:33:05.126 =================================================================================================================== 00:33:05.126 Total : 20220.62 78.99 0.00 0.00 6323.26 
2669.99 15631.55 00:33:05.126 0 00:33:05.126 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:05.126 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:05.126 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:05.126 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:05.126 | .driver_specific 00:33:05.126 | .nvme_error 00:33:05.126 | .status_code 00:33:05.126 | .command_transient_transport_error' 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1664771 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1664771 ']' 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1664771 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1664771 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1664771' 00:33:05.384 killing process with pid 1664771 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1664771 00:33:05.384 Received shutdown signal, test time was about 2.000000 seconds 00:33:05.384 00:33:05.384 Latency(us) 00:33:05.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.384 =================================================================================================================== 00:33:05.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:05.384 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1664771 00:33:05.644 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:05.644 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1665290 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1665290 /var/tmp/bperf.sock 00:33:05.645 07:20:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1665290 ']' 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:05.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:05.645 07:20:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.645 [2024-07-13 07:20:34.897535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:05.645 [2024-07-13 07:20:34.897632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665290 ] 00:33:05.645 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:05.645 Zero copy mechanism will not be used. 00:33:05.645 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.645 [2024-07-13 07:20:34.929661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:05.645 [2024-07-13 07:20:34.961368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.645 [2024-07-13 07:20:35.056113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.908 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:05.908 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:05.908 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:05.908 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:06.165 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:06.165 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.165 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:06.165 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.165 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:06.165 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:06.422 nvme0n1 00:33:06.422 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:06.422 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.422 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:06.422 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.422 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:06.422 07:20:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:06.680 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:06.680 Zero copy mechanism will not be used. 00:33:06.680 Running I/O for 2 seconds... 00:33:06.680 [2024-07-13 07:20:35.907680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:06.680 [2024-07-13 07:20:35.908105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.680 [2024-07-13 07:20:35.908158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.680 [2024-07-13 07:20:35.919694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:06.680 [2024-07-13 07:20:35.920121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.680 [2024-07-13 07:20:35.920167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.680 [2024-07-13 07:20:35.931490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:06.680 [2024-07-13 07:20:35.931888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.680 [2024-07-13 07:20:35.931938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.680 [2024-07-13 07:20:35.944062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:06.680 [2024-07-13 07:20:35.944406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.680 [2024-07-13 07:20:35.944437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.680 [2024-07-13 07:20:35.956744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:06.680 [2024-07-13 07:20:35.957107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.680 [2024-07-13 07:20:35.957155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.680 [2024-07-13 07:20:35.968651] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
00:33:06.680 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:06.680 Zero copy mechanism will not be used.
00:33:06.680 Running I/O for 2 seconds...
00:33:06.680 [2024-07-13 07:20:35.907680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90
00:33:06.680 [2024-07-13 07:20:35.908105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.680 [2024-07-13 07:20:35.908158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:06.680 [2024-07-13 07:20:35.919694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90
00:33:06.680 [2024-07-13 07:20:35.920121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.680 [2024-07-13 07:20:35.920167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record sequence repeats for the rest of the 2-second run (2024-07-13 07:20:35.931 through 07:20:37.707): tcp.c:2067 data_crc32_calc_done *ERROR*, nvme_qpair.c:243 WRITE *NOTICE*, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with only the lba, sqhd, and elapsed-time fields varying ...]
00:33:08.489 [2024-07-13 07:20:37.722331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90
00:33:08.489 [2024-07-13 07:20:37.722814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.722844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.737453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.737838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.737887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.752602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.753161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.753204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.767758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.768112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.768142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.783382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.783843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.783890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.798586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.799126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.799174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.814469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.815007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.815042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.830448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.830962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.830991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.845837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.846376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.846422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.860243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.860756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.860785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.874213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.874782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.874812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.489 [2024-07-13 07:20:37.890008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1062d30) with pdu=0x2000190fef90 00:33:08.489 [2024-07-13 07:20:37.890457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.489 [2024-07-13 07:20:37.890489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.489 00:33:08.489 Latency(us) 00:33:08.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.489 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:08.489 nvme0n1 : 2.01 2295.86 286.98 0.00 0.00 6950.03 4247.70 16505.36 00:33:08.489 =================================================================================================================== 00:33:08.489 Total : 2295.86 286.98 0.00 0.00 6950.03 4247.70 16505.36 00:33:08.489 0 00:33:08.489 07:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:08.489 07:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:08.489 07:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:08.489 | .driver_specific 00:33:08.489 | .nvme_error 00:33:08.489 | .status_code 00:33:08.489 | .command_transient_transport_error' 00:33:08.489 07:20:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
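For reference, the get_transient_errcount helper traced above reduces to the following sketch (reconstructed from the trace; the real helper lives in test/nvmf/host/digest.sh, and bperf_rpc is its wrapper for talking to bdevperf's RPC socket):

    get_transient_errcount() {
        # Query bdevperf's RPC socket for the bdev's NVMe error counters and
        # pull out the transient-transport-error count the digest test asserts on.
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    # The test then checks that digest errors were counted, as on the next line:
    # (( $(get_transient_errcount nvme0n1) > 0 ))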
00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1665290 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1665290 ']' 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1665290 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1665290 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1665290' 00:33:08.747 killing process with pid 1665290 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1665290 00:33:08.747 Received shutdown signal, test time was about 2.000000 seconds 00:33:08.747 00:33:08.747 Latency(us) 00:33:08.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.747 =================================================================================================================== 00:33:08.747 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:08.747 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1665290 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1663811 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1663811 ']' 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1663811 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1663811 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1663811' 00:33:09.005 killing process with pid 1663811 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1663811 00:33:09.005 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1663811 00:33:09.263 00:33:09.263 real 0m15.359s 00:33:09.263 user 0m30.396s 00:33:09.263 sys 0m4.236s 00:33:09.263 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:09.263 07:20:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.263 
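The killprocess trace above expands to roughly this helper (a sketch of the logic as traced, with the autotest_common.sh line numbers noted; the sudo special case is elided):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                      # @948: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0         # @952: nothing left to kill
        local process_name=
        if [[ $(uname) == Linux ]]; then               # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: reactor_1 here
        fi
        if [[ $process_name == sudo ]]; then           # @958: assumption: the real
            :                                          # helper signals the sudo child
        fi
        echo "killing process with pid $pid"           # @966
        kill "$pid"                                    # @967
        wait "$pid" 2>/dev/null || true                # @972
    }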
************************************ 00:33:09.263 END TEST nvmf_digest_error 00:33:09.263 ************************************ 00:33:09.263 07:20:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:09.263 07:20:38 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:09.263 07:20:38 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:09.263 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:09.263 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:09.521 rmmod nvme_tcp 00:33:09.521 rmmod nvme_fabrics 00:33:09.521 rmmod nvme_keyring 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1663811 ']' 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1663811 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1663811 ']' 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1663811 00:33:09.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1663811) - No such process 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1663811 is not found' 00:33:09.521 Process with pid 1663811 is not found 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.521 07:20:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.420 07:20:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:11.420 00:33:11.420 real 0m35.187s 00:33:11.420 user 1m2.015s 00:33:11.420 sys 0m9.778s 00:33:11.420 07:20:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:11.420 07:20:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:11.420 ************************************ 00:33:11.420 END TEST nvmf_digest 00:33:11.420 ************************************ 00:33:11.420 07:20:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:11.420 07:20:40 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:11.420 07:20:40 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:11.420 07:20:40 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:11.420 07:20:40 
nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:11.420 07:20:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:11.420 07:20:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:11.420 07:20:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.679 ************************************ 00:33:11.679 START TEST nvmf_bdevperf 00:33:11.679 ************************************ 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:11.679 * Looking for test storage... 00:33:11.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(the same three toolchain directories repeated five more times):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:(same repeated prefix list as above)
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:(same repeated prefix list as above)
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo (the final PATH value, identical to the @4 assignment)
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:33:11.680 07:20:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:33:11.680 07:20:40 nvmf_tcp.nvmf_bdevperf --
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.679 07:20:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:11.680 07:20:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.680 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:11.680 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:11.680 07:20:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:11.680 07:20:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:13.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:13.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:13.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:13.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.586 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:13.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:13.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:33:13.587 00:33:13.587 --- 10.0.0.2 ping statistics --- 00:33:13.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.587 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:13.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:33:13.587 00:33:13.587 --- 10.0.0.1 ping statistics --- 00:33:13.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.587 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1667635 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1667635 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1667635 ']' 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:13.587 07:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:13.587 [2024-07-13 07:20:43.007579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
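Summarized, the nvmftestinit trace above builds the test topology and launches the target inside the namespace; assembled from the commands in the trace (cvl_0_0 and cvl_0_1 are the two detected E810 ports), the sequence is:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                                                          # 1667635 in this run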
00:33:13.587 [2024-07-13 07:20:43.007674] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.848 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.848 [2024-07-13 07:20:43.048809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:13.848 [2024-07-13 07:20:43.075284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:13.848 [2024-07-13 07:20:43.165498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.848 [2024-07-13 07:20:43.165561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.848 [2024-07-13 07:20:43.165578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.848 [2024-07-13 07:20:43.165590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.848 [2024-07-13 07:20:43.165600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.848 [2024-07-13 07:20:43.165964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:13.848 [2024-07-13 07:20:43.166022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:13.848 [2024-07-13 07:20:43.166025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.848 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:13.848 [2024-07-13 07:20:43.299058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.105 Malloc0 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.105 07:20:43 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.105 [2024-07-13 07:20:43.365862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:14.105 { 00:33:14.105 "params": { 00:33:14.105 "name": "Nvme$subsystem", 00:33:14.105 "trtype": "$TEST_TRANSPORT", 00:33:14.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.105 "adrfam": "ipv4", 00:33:14.105 "trsvcid": "$NVMF_PORT", 00:33:14.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.105 "hdgst": ${hdgst:-false}, 00:33:14.105 "ddgst": ${ddgst:-false} 00:33:14.105 }, 00:33:14.105 "method": "bdev_nvme_attach_controller" 00:33:14.105 } 00:33:14.105 EOF 00:33:14.105 )") 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:14.105 07:20:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:14.105 "params": { 00:33:14.105 "name": "Nvme1", 00:33:14.105 "trtype": "tcp", 00:33:14.105 "traddr": "10.0.0.2", 00:33:14.105 "adrfam": "ipv4", 00:33:14.105 "trsvcid": "4420", 00:33:14.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:14.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:14.105 "hdgst": false, 00:33:14.105 "ddgst": false 00:33:14.105 }, 00:33:14.105 "method": "bdev_nvme_attach_controller" 00:33:14.105 }' 00:33:14.105 [2024-07-13 07:20:43.415021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
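The rpc_cmd calls traced above provision the target end to end; condensed, the sequence is (a sketch; rpc_cmd wraps scripts/rpc.py against the target's /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0           # $MALLOC_BDEV_SIZE MiB, $MALLOC_BLOCK_SIZE-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420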
00:33:14.105 [2024-07-13 07:20:43.415090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667668 ] 00:33:14.105 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.105 [2024-07-13 07:20:43.446279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:14.105 [2024-07-13 07:20:43.475296] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.363 [2024-07-13 07:20:43.567106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.620 Running I/O for 1 seconds... 00:33:15.552 00:33:15.552 Latency(us) 00:33:15.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.552 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:15.552 Verification LBA range: start 0x0 length 0x4000 00:33:15.552 Nvme1n1 : 1.01 8642.79 33.76 0.00 0.00 14747.37 3203.98 18155.90 00:33:15.552 =================================================================================================================== 00:33:15.552 Total : 8642.79 33.76 0.00 0.00 14747.37 3203.98 18155.90 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1667926 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:15.810 { 00:33:15.810 "params": { 00:33:15.810 "name": "Nvme$subsystem", 00:33:15.810 "trtype": "$TEST_TRANSPORT", 00:33:15.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.810 "adrfam": "ipv4", 00:33:15.810 "trsvcid": "$NVMF_PORT", 00:33:15.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.810 "hdgst": ${hdgst:-false}, 00:33:15.810 "ddgst": ${ddgst:-false} 00:33:15.810 }, 00:33:15.810 "method": "bdev_nvme_attach_controller" 00:33:15.810 } 00:33:15.810 EOF 00:33:15.810 )") 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
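The --json /dev/fd/62 above (and /dev/fd/63 for this second run) is bash process substitution over gen_nvmf_target_json's output, so the two bdevperf passes amount to the following sketch; reading -f as "keep running across I/O failures" is an assumption from context, since the flag exists so the target kill below can be exercised mid-run:

    # 1-second verify pass against the attached controller:
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
    # 15-second pass with -f for the failover exercise that follows:
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!                                   # 1667926 in this run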
00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:15.810 07:20:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:15.810 "params": { 00:33:15.810 "name": "Nvme1", 00:33:15.810 "trtype": "tcp", 00:33:15.810 "traddr": "10.0.0.2", 00:33:15.810 "adrfam": "ipv4", 00:33:15.810 "trsvcid": "4420", 00:33:15.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.810 "hdgst": false, 00:33:15.810 "ddgst": false 00:33:15.810 }, 00:33:15.810 "method": "bdev_nvme_attach_controller" 00:33:15.810 }' 00:33:15.810 [2024-07-13 07:20:45.228811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:15.810 [2024-07-13 07:20:45.228923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667926 ] 00:33:15.810 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.810 [2024-07-13 07:20:45.260545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:16.068 [2024-07-13 07:20:45.290336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.068 [2024-07-13 07:20:45.377449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.325 Running I/O for 15 seconds... 00:33:18.857 07:20:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1667635 00:33:18.858 07:20:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:18.858 [2024-07-13 07:20:48.197036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.197084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.197115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.197140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.197158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.197172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.197190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.197205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.197221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.197237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.197253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:18.858 [2024-07-13 07:20:48.197287 .. 07:20:48.198324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ ... paired with nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (this pair of records repeats for each outstanding READ after the target is killed: sqid:1, nsid:1, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, with cid varying and lba running 47120 through 47360 in this stretch)
00:33:18.858 [2024-07-13 07:20:48.198341]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.198359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.198374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.198392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.198408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.198425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.198441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.198458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.198474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.198491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.198506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.858 [2024-07-13 07:20:48.198523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.858 [2024-07-13 07:20:48.198539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.198977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.198992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:18.859 [2024-07-13 07:20:48.199710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.859 [2024-07-13 07:20:48.199821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.859 [2024-07-13 07:20:48.199853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.859 [2024-07-13 07:20:48.199894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.859 [2024-07-13 07:20:48.199943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.859 [2024-07-13 07:20:48.199978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.859 [2024-07-13 07:20:48.199993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.859 [2024-07-13 07:20:48.200007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200055] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47992 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.200975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.200990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.201004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.201033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.201067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 
[2024-07-13 07:20:48.201097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.201128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.201174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.860 [2024-07-13 07:20:48.201205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.860 [2024-07-13 07:20:48.201237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.860 [2024-07-13 07:20:48.201269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.860 [2024-07-13 07:20:48.201300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.860 [2024-07-13 07:20:48.201336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.860 [2024-07-13 07:20:48.201370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.860 [2024-07-13 07:20:48.201401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.860 [2024-07-13 07:20:48.201418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.860 [2024-07-13 07:20:48.201434] nvme_qpair.c: 
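The "(00/08)" in every completion above is NVMe status code type 0x00 (generic) with status code 0x08 (Command Aborted due to SQ Deletion): the commands were not failed by the device, they were flushed when submission queue 1 was torn down. A minimal sketch of how an I/O completion callback can recognize these completions using the public definitions from spdk/nvme_spec.h; the callback name and the counter it updates are hypothetical, not part of this test:

#include <stdio.h>
#include "spdk/nvme.h" /* pulls in spdk/nvme_spec.h */

/* Hypothetical spdk_nvme_cmd_cb: count I/Os flushed by SQ deletion so
 * the caller can resubmit them once the controller reset completes. */
static void
io_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        unsigned int *n_aborted = arg; /* hypothetical bookkeeping */

        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* Matches "ABORTED - SQ DELETION (00/08)" above: the
                 * command was flushed, not rejected, so it is a
                 * candidate for resubmission after the reset. */
                (*n_aborted)++;
                return;
        }
        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                        cpl->status.sct, cpl->status.sc);
        }
}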
00:33:18.860 [2024-07-13 07:20:48.201450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed4d60 is same with the state(5) to be set
00:33:18.861 [2024-07-13 07:20:48.201469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:18.861 [2024-07-13 07:20:48.201483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:18.861 [2024-07-13 07:20:48.201495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47776 len:8 PRP1 0x0 PRP2 0x0
00:33:18.861 [2024-07-13 07:20:48.201510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:18.861 [2024-07-13 07:20:48.201576] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ed4d60 was disconnected and freed. reset controller.
00:33:18.861 [2024-07-13 07:20:48.205740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:18.861 [2024-07-13 07:20:48.205818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:18.861 [2024-07-13 07:20:48.206537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.861 [2024-07-13 07:20:48.206566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:18.861 [2024-07-13 07:20:48.206597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:18.861 [2024-07-13 07:20:48.206844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:18.861 [2024-07-13 07:20:48.207095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:18.861 [2024-07-13 07:20:48.207118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:18.861 [2024-07-13 07:20:48.207135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:18.861 [2024-07-13 07:20:48.210725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
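errno 111 on Linux is ECONNREFUSED: the reconnect attempts in this run fail because nothing is listening on 10.0.0.2:4420 while the target side of the test is down. The same failure can be reproduced outside SPDK with a plain socket; this standalone sketch assumes a host where 10.0.0.2 is reachable but has no listener on port 4420:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
        struct sockaddr_in addr = {
                .sin_family = AF_INET,
                .sin_port = htons(4420), /* standard NVMe/TCP port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
                /* With no listener this prints:
                 * connect: errno = 111 (Connection refused),
                 * the same errno reported by posix_sock_create above. */
                printf("connect: errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
}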
00:33:18.861-00:33:19.120 [2024-07-13 07:20:48.220017-48.428073] [condensed: the reset cycle logged at 07:20:48.205740-48.210725 repeats 16 more times at roughly 13-14 ms intervals with identical output: nvme_ctrlr.c:1720 "resetting controller", posix.c:1038 "connect() failed, errno = 111", nvme_tcp.c:2383 "sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420", nvme_tcp.c:2185 "Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor", nvme_ctrlr.c:4164/1818/1106 error-state, reinitialization-failed and failed-state errors, each cycle ending with bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.]
00:33:19.120 [2024-07-13 07:20:48.437350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.120 [2024-07-13 07:20:48.437764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.120 [2024-07-13 07:20:48.437793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.120 [2024-07-13 07:20:48.437809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.120 [2024-07-13 07:20:48.438059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.120 [2024-07-13 07:20:48.438278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.120 [2024-07-13 07:20:48.438297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.120 [2024-07-13 07:20:48.438310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.120 [2024-07-13 07:20:48.441287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.120 [2024-07-13 07:20:48.450563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.120 [2024-07-13 07:20:48.450979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.120 [2024-07-13 07:20:48.451008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.120 [2024-07-13 07:20:48.451029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.120 [2024-07-13 07:20:48.451259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.120 [2024-07-13 07:20:48.451488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.120 [2024-07-13 07:20:48.451509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.120 [2024-07-13 07:20:48.451523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.120 [2024-07-13 07:20:48.454952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.120 [2024-07-13 07:20:48.464318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.120 [2024-07-13 07:20:48.464753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.120 [2024-07-13 07:20:48.464782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.120 [2024-07-13 07:20:48.464798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.120 [2024-07-13 07:20:48.465020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.120 [2024-07-13 07:20:48.465252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.120 [2024-07-13 07:20:48.465273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.120 [2024-07-13 07:20:48.465286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.120 [2024-07-13 07:20:48.468722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.120 [2024-07-13 07:20:48.477703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.120 [2024-07-13 07:20:48.478149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.120 [2024-07-13 07:20:48.478178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.120 [2024-07-13 07:20:48.478194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.120 [2024-07-13 07:20:48.478434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.120 [2024-07-13 07:20:48.478633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.120 [2024-07-13 07:20:48.478652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.478665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.121 [2024-07-13 07:20:48.481782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.121 [2024-07-13 07:20:48.491058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.121 [2024-07-13 07:20:48.491461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.121 [2024-07-13 07:20:48.491503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.121 [2024-07-13 07:20:48.491519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.121 [2024-07-13 07:20:48.491777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.121 [2024-07-13 07:20:48.492026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.121 [2024-07-13 07:20:48.492059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.492074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.121 [2024-07-13 07:20:48.495051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.121 [2024-07-13 07:20:48.504330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.121 [2024-07-13 07:20:48.504712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.121 [2024-07-13 07:20:48.504740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.121 [2024-07-13 07:20:48.504756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.121 [2024-07-13 07:20:48.505007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.121 [2024-07-13 07:20:48.505227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.121 [2024-07-13 07:20:48.505247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.505259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.121 [2024-07-13 07:20:48.508238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.121 [2024-07-13 07:20:48.517544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.121 [2024-07-13 07:20:48.518008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.121 [2024-07-13 07:20:48.518037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.121 [2024-07-13 07:20:48.518053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.121 [2024-07-13 07:20:48.518293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.121 [2024-07-13 07:20:48.518493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.121 [2024-07-13 07:20:48.518512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.518533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.121 [2024-07-13 07:20:48.521512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.121 [2024-07-13 07:20:48.530780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.121 [2024-07-13 07:20:48.531274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.121 [2024-07-13 07:20:48.531315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.121 [2024-07-13 07:20:48.531331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.121 [2024-07-13 07:20:48.531588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.121 [2024-07-13 07:20:48.531788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.121 [2024-07-13 07:20:48.531807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.531819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.121 [2024-07-13 07:20:48.534797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.121 [2024-07-13 07:20:48.544101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.121 [2024-07-13 07:20:48.544594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.121 [2024-07-13 07:20:48.544623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.121 [2024-07-13 07:20:48.544639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.121 [2024-07-13 07:20:48.544887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.121 [2024-07-13 07:20:48.545100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.121 [2024-07-13 07:20:48.545120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.545133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.121 [2024-07-13 07:20:48.548111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.121 [2024-07-13 07:20:48.557407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.121 [2024-07-13 07:20:48.557880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.121 [2024-07-13 07:20:48.557909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.121 [2024-07-13 07:20:48.557925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.121 [2024-07-13 07:20:48.558153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.121 [2024-07-13 07:20:48.558371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.121 [2024-07-13 07:20:48.558391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.558403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.121 [2024-07-13 07:20:48.561392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.121 [2024-07-13 07:20:48.571005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.121 [2024-07-13 07:20:48.571455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.121 [2024-07-13 07:20:48.571498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.121 [2024-07-13 07:20:48.571514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.121 [2024-07-13 07:20:48.571803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.121 [2024-07-13 07:20:48.572050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.121 [2024-07-13 07:20:48.572078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.121 [2024-07-13 07:20:48.572103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.380 [2024-07-13 07:20:48.575440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.584289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.584705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.584734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.584765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.585019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.585239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.585258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.585271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.588248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.597532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.597928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.597958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.597975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.598217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.598415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.598434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.598447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.601421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.610917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.611349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.611391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.611407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.611661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.611901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.611922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.611935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.614911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.624174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.624591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.624617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.624647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.624878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.625082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.625102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.625120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.628096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.637371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.637779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.637808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.637824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.638076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.638295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.638314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.638327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.641318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.650572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.651001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.651029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.651045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.651286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.651484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.651504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.651516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.654522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.663773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.664207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.664236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.664251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.664480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.664700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.664720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.664733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.667698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.677134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.677549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.677595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.677613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.677855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.678084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.678104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.678117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.681094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.690364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.690784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.690811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.690841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.691069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.691286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.691306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.691318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.694292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.703540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.704023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.704052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.704069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.704310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.704524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.704544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.704557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.707964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.716782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.717213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.717241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.717257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.717497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.717700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.717720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.717732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.720709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.729987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.730431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.730472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.730489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.730732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.730962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.730983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.730997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.734008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.743287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.743707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.743735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.743767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.744035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.744274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.744294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.744306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.747280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.756534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.756946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.756975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.756992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.757246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.757445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.757464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.757477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.760473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.769786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.770237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.770265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.770296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.770533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.770731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.770750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.770763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.773762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.783062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.783560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.783588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.783604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.783844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.784058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.784078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.784091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.787061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.381 [2024-07-13 07:20:48.796324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.796732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.796760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.381 [2024-07-13 07:20:48.796776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.381 [2024-07-13 07:20:48.797028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.381 [2024-07-13 07:20:48.797248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.381 [2024-07-13 07:20:48.797267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.381 [2024-07-13 07:20:48.797280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.381 [2024-07-13 07:20:48.800254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.381 [2024-07-13 07:20:48.809552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.381 [2024-07-13 07:20:48.809974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.381 [2024-07-13 07:20:48.810002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.382 [2024-07-13 07:20:48.810023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.382 [2024-07-13 07:20:48.810265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.382 [2024-07-13 07:20:48.810465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.382 [2024-07-13 07:20:48.810484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.382 [2024-07-13 07:20:48.810496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.382 [2024-07-13 07:20:48.813474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.382 [2024-07-13 07:20:48.822907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.382 [2024-07-13 07:20:48.823264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.382 [2024-07-13 07:20:48.823291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.382 [2024-07-13 07:20:48.823307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.382 [2024-07-13 07:20:48.823528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.382 [2024-07-13 07:20:48.823742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.382 [2024-07-13 07:20:48.823762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.382 [2024-07-13 07:20:48.823774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.382 [2024-07-13 07:20:48.826735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.649 [2024-07-13 07:20:48.837196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.649 [2024-07-13 07:20:48.837679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.649 [2024-07-13 07:20:48.837722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.649 [2024-07-13 07:20:48.837750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.649 [2024-07-13 07:20:48.838048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.649 [2024-07-13 07:20:48.838339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.649 [2024-07-13 07:20:48.838371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.649 [2024-07-13 07:20:48.838400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.649 [2024-07-13 07:20:48.842660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.649 [2024-07-13 07:20:48.850468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.649 [2024-07-13 07:20:48.850864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.649 [2024-07-13 07:20:48.850900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.649 [2024-07-13 07:20:48.850920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.851149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.851363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.851388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.851401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.854419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.650 [2024-07-13 07:20:48.863808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.864211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.864241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.864257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.864503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.864701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.864720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.864732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.867711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.650 [2024-07-13 07:20:48.877211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.877638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.877666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.877681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.877913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.878118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.878138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.878150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.881123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.650 [2024-07-13 07:20:48.890392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.890842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.890879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.890901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.891128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.891360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.891380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.891392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.894368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.650 [2024-07-13 07:20:48.903622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.904042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.904071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.904087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.904328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.904542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.904562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.904574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.907549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.650 [2024-07-13 07:20:48.916829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.917319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.917347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.917364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.917604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.917818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.917837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.917850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.920823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.650 [2024-07-13 07:20:48.930118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.930565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.930607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.930624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.930874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.931094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.931115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.931128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.934101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.650 [2024-07-13 07:20:48.943380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.943744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.943771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.943788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.944056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.944275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.944295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.944308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.947300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.650 [2024-07-13 07:20:48.956721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:19.650 [2024-07-13 07:20:48.957175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.650 [2024-07-13 07:20:48.957203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:19.650 [2024-07-13 07:20:48.957220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:19.650 [2024-07-13 07:20:48.957434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:19.650 [2024-07-13 07:20:48.957652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.650 [2024-07-13 07:20:48.957674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.650 [2024-07-13 07:20:48.957688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.650 [2024-07-13 07:20:48.961020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.650 [2024-07-13 07:20:48.970110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.650 [2024-07-13 07:20:48.970522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.650 [2024-07-13 07:20:48.970549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.650 [2024-07-13 07:20:48.970565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.650 [2024-07-13 07:20:48.970819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.650 [2024-07-13 07:20:48.971064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.650 [2024-07-13 07:20:48.971086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.650 [2024-07-13 07:20:48.971100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.650 [2024-07-13 07:20:48.974221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.650 [2024-07-13 07:20:48.983299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.650 [2024-07-13 07:20:48.983698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.650 [2024-07-13 07:20:48.983727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.650 [2024-07-13 07:20:48.983743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.650 [2024-07-13 07:20:48.983997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.650 [2024-07-13 07:20:48.984217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.650 [2024-07-13 07:20:48.984236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.650 [2024-07-13 07:20:48.984253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.650 [2024-07-13 07:20:48.987244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.650 [2024-07-13 07:20:48.996596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.650 [2024-07-13 07:20:48.997037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:48.997079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:48.997095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:48.997332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:48.997530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:48.997550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:48.997563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.000540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.009840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.010292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.010320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.010336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.010577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:49.010792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:49.010811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:49.010824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.013802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.023063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.023468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.023496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.023511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.023762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:49.023988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:49.024009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:49.024022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.026994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.036264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.036643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.036686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.036702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.036979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:49.037178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:49.037197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:49.037210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.040179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.049463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.049889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.049917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.049932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.050167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:49.050366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:49.050385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:49.050398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.053375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.062651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.063093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.063121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.063137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.063391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:49.063589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:49.063609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:49.063621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.066596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.075869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.076328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.076354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.076370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.076595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:49.076800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:49.076834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:49.076847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.079831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.089105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.089554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.089597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.089613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.089855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.651 [2024-07-13 07:20:49.090083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.651 [2024-07-13 07:20:49.090103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.651 [2024-07-13 07:20:49.090116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.651 [2024-07-13 07:20:49.093091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.651 [2024-07-13 07:20:49.103064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.651 [2024-07-13 07:20:49.103532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.651 [2024-07-13 07:20:49.103572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.651 [2024-07-13 07:20:49.103603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.651 [2024-07-13 07:20:49.103839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.104071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.104103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.104127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.107315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.116390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.116784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.116813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.116829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.117094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.117311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.117331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.117348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.120362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.129692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.130086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.130115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.130132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.130374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.130574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.130593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.130605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.133581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.143043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.143496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.143539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.143556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.143797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.144031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.144052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.144066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.147058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.156336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.156813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.156842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.156858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.157106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.157322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.157341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.157353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.160326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.169586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.170045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.170079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.170096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.170337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.170551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.170570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.170582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.173558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.182851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.183289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.183332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.183348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.183599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.183797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.183816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.183829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.186804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.196079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.196517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.196543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.196573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.196807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.197035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.197055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.197068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.200041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.209333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.209761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.209789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.209805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.210028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.210279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.210314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.210328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.213673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.222779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.223547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.223576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.223592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.223846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.224084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.224107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.224121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.227204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.236133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.910 [2024-07-13 07:20:49.236567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.910 [2024-07-13 07:20:49.236609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.910 [2024-07-13 07:20:49.236626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.910 [2024-07-13 07:20:49.236878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.910 [2024-07-13 07:20:49.237105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.910 [2024-07-13 07:20:49.237125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.910 [2024-07-13 07:20:49.237138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.910 [2024-07-13 07:20:49.240136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.910 [2024-07-13 07:20:49.249404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.249827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.249853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.249896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.250125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.250340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.250359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.250372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.253353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.262691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.263158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.263186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.263202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.263443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.263641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.263660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.263673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.266673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.275961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.276399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.276426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.276442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.276695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.276941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.276963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.276976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.279964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.289222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.289691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.289719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.289736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.289989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.290209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.290229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.290241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.293213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.303479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.303963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.304000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.304034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.304330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.304608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.304637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.304662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.308419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.316687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.317172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.317202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.317219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.317461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.317675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.317695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.317707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.320686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.329960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.330401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.330431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.330447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.330701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.330927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.330948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.330961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.333963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.343243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.343713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.343741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.343757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.911 [2024-07-13 07:20:49.344010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.911 [2024-07-13 07:20:49.344230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.911 [2024-07-13 07:20:49.344254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.911 [2024-07-13 07:20:49.344267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.911 [2024-07-13 07:20:49.347244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:19.911 [2024-07-13 07:20:49.356500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:19.911 [2024-07-13 07:20:49.356884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.911 [2024-07-13 07:20:49.356912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:19.911 [2024-07-13 07:20:49.356929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:19.912 [2024-07-13 07:20:49.357177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:19.912 [2024-07-13 07:20:49.357391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:19.912 [2024-07-13 07:20:49.357411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:19.912 [2024-07-13 07:20:49.357423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:19.912 [2024-07-13 07:20:49.360666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.170 [2024-07-13 07:20:49.370608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.170 [2024-07-13 07:20:49.371071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.170 [2024-07-13 07:20:49.371113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.170 [2024-07-13 07:20:49.371130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.170 [2024-07-13 07:20:49.371374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.170 [2024-07-13 07:20:49.371626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.170 [2024-07-13 07:20:49.371650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.170 [2024-07-13 07:20:49.371666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.170 [2024-07-13 07:20:49.375253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.170 [2024-07-13 07:20:49.384539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.170 [2024-07-13 07:20:49.384982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.170 [2024-07-13 07:20:49.385024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.170 [2024-07-13 07:20:49.385041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.170 [2024-07-13 07:20:49.385301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.170 [2024-07-13 07:20:49.385544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.170 [2024-07-13 07:20:49.385568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.170 [2024-07-13 07:20:49.385582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.170 [2024-07-13 07:20:49.389164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.170 [2024-07-13 07:20:49.398436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.170 [2024-07-13 07:20:49.398878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.170 [2024-07-13 07:20:49.398909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.170 [2024-07-13 07:20:49.398927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.170 [2024-07-13 07:20:49.399165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.170 [2024-07-13 07:20:49.399408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.170 [2024-07-13 07:20:49.399432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.170 [2024-07-13 07:20:49.399447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.170 [2024-07-13 07:20:49.403025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.170 [2024-07-13 07:20:49.412294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.170 [2024-07-13 07:20:49.412727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.170 [2024-07-13 07:20:49.412766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.170 [2024-07-13 07:20:49.412781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.170 [2024-07-13 07:20:49.413064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.170 [2024-07-13 07:20:49.413308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.170 [2024-07-13 07:20:49.413331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.170 [2024-07-13 07:20:49.413346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.170 [2024-07-13 07:20:49.416917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.170 [2024-07-13 07:20:49.426192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.170 [2024-07-13 07:20:49.426593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.170 [2024-07-13 07:20:49.426624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.170 [2024-07-13 07:20:49.426641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.170 [2024-07-13 07:20:49.426890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.170 [2024-07-13 07:20:49.427134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.170 [2024-07-13 07:20:49.427157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.170 [2024-07-13 07:20:49.427172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.170 [2024-07-13 07:20:49.430739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.170 [2024-07-13 07:20:49.440233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.170 [2024-07-13 07:20:49.440730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.170 [2024-07-13 07:20:49.440782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.170 [2024-07-13 07:20:49.440800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.170 [2024-07-13 07:20:49.441059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.170 [2024-07-13 07:20:49.441303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.170 [2024-07-13 07:20:49.441327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.170 [2024-07-13 07:20:49.441342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.170 [2024-07-13 07:20:49.444922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.170 [2024-07-13 07:20:49.454217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.170 [2024-07-13 07:20:49.454646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.170 [2024-07-13 07:20:49.454677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.170 [2024-07-13 07:20:49.454694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.170 [2024-07-13 07:20:49.454944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.455187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.455211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.455226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.458794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.468078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.171 [2024-07-13 07:20:49.468499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.171 [2024-07-13 07:20:49.468527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.171 [2024-07-13 07:20:49.468543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.171 [2024-07-13 07:20:49.468784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.469051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.469076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.469091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.472656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.481942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.171 [2024-07-13 07:20:49.482370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.171 [2024-07-13 07:20:49.482401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.171 [2024-07-13 07:20:49.482418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.171 [2024-07-13 07:20:49.482656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.482908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.482933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.482954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.486518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.495788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.171 [2024-07-13 07:20:49.496224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.171 [2024-07-13 07:20:49.496255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.171 [2024-07-13 07:20:49.496273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.171 [2024-07-13 07:20:49.496511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.496752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.496776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.496791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.500368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.509648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.171 [2024-07-13 07:20:49.510093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.171 [2024-07-13 07:20:49.510121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.171 [2024-07-13 07:20:49.510137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.171 [2024-07-13 07:20:49.510389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.510632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.510655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.510670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.514247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.523520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.171 [2024-07-13 07:20:49.523954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.171 [2024-07-13 07:20:49.523996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.171 [2024-07-13 07:20:49.524011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.171 [2024-07-13 07:20:49.524261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.524503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.524527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.524541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.528119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.537393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.171 [2024-07-13 07:20:49.537852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.171 [2024-07-13 07:20:49.537892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.171 [2024-07-13 07:20:49.537911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.171 [2024-07-13 07:20:49.538149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.538391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.538415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.538430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.542011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.551288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.171 [2024-07-13 07:20:49.551693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.171 [2024-07-13 07:20:49.551724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420
00:33:20.171 [2024-07-13 07:20:49.551741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set
00:33:20.171 [2024-07-13 07:20:49.551991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor
00:33:20.171 [2024-07-13 07:20:49.552235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.171 [2024-07-13 07:20:49.552258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.171 [2024-07-13 07:20:49.552273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.171 [2024-07-13 07:20:49.555842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.171 [2024-07-13 07:20:49.565162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.171 [2024-07-13 07:20:49.565590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.171 [2024-07-13 07:20:49.565621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.171 [2024-07-13 07:20:49.565640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.171 [2024-07-13 07:20:49.565890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.172 [2024-07-13 07:20:49.566133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.172 [2024-07-13 07:20:49.566157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.172 [2024-07-13 07:20:49.566172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.172 [2024-07-13 07:20:49.569739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.172 [2024-07-13 07:20:49.579024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.172 [2024-07-13 07:20:49.579461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.172 [2024-07-13 07:20:49.579493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.172 [2024-07-13 07:20:49.579510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.172 [2024-07-13 07:20:49.579753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.172 [2024-07-13 07:20:49.580009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.172 [2024-07-13 07:20:49.580034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.172 [2024-07-13 07:20:49.580049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.172 [2024-07-13 07:20:49.583615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.172 [2024-07-13 07:20:49.592918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.172 [2024-07-13 07:20:49.593362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.172 [2024-07-13 07:20:49.593393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.172 [2024-07-13 07:20:49.593411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.172 [2024-07-13 07:20:49.593648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.172 [2024-07-13 07:20:49.593902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.172 [2024-07-13 07:20:49.593927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.172 [2024-07-13 07:20:49.593942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.172 [2024-07-13 07:20:49.597512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.172 [2024-07-13 07:20:49.606813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.172 [2024-07-13 07:20:49.607254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.172 [2024-07-13 07:20:49.607286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.172 [2024-07-13 07:20:49.607304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.172 [2024-07-13 07:20:49.607541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.172 [2024-07-13 07:20:49.607783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.172 [2024-07-13 07:20:49.607807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.172 [2024-07-13 07:20:49.607823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.172 [2024-07-13 07:20:49.611405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.172 [2024-07-13 07:20:49.620777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.172 [2024-07-13 07:20:49.621245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.172 [2024-07-13 07:20:49.621289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.172 [2024-07-13 07:20:49.621306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.172 [2024-07-13 07:20:49.621563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.172 [2024-07-13 07:20:49.621832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.172 [2024-07-13 07:20:49.621879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.172 [2024-07-13 07:20:49.621910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.625698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.430 [2024-07-13 07:20:49.634734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.635154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.635187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.635206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.635444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.635687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.635710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.635725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.639301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.430 [2024-07-13 07:20:49.648592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.649007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.649039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.649057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.649295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.649538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.649561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.649576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.653150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.430 [2024-07-13 07:20:49.662628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.663062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.663094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.663112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.663350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.663592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.663615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.663631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.667206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.430 [2024-07-13 07:20:49.676482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.676923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.676971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.676988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.677232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.677475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.677499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.677514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.681077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.430 [2024-07-13 07:20:49.690368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.690793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.690860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.690890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.691129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.691371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.691395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.691410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.695068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.430 [2024-07-13 07:20:49.704363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.704743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.704775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.704793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.705041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.705285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.705308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.705323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.708897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.430 [2024-07-13 07:20:49.718379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.718787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.718818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.718836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.719085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.719334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.719358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.719373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.722951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.430 [2024-07-13 07:20:49.732261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.732685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.732751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.732769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.733020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.733263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.733286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.733302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.736882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.430 [2024-07-13 07:20:49.746183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.746641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.746683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.746701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.746959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.747179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.747198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.747210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.750789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.430 [2024-07-13 07:20:49.760094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.760563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.760617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.760635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.760882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.761125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.761148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.761163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.764729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.430 [2024-07-13 07:20:49.774035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.774458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.774489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.774506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.774744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.774996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.775020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.775035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.778599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.430 [2024-07-13 07:20:49.787915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.788326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.430 [2024-07-13 07:20:49.788358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.430 [2024-07-13 07:20:49.788375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.430 [2024-07-13 07:20:49.788614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.430 [2024-07-13 07:20:49.788856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.430 [2024-07-13 07:20:49.788891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.430 [2024-07-13 07:20:49.788908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.430 [2024-07-13 07:20:49.792475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.430 [2024-07-13 07:20:49.801756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.430 [2024-07-13 07:20:49.802170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.431 [2024-07-13 07:20:49.802201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.431 [2024-07-13 07:20:49.802219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.431 [2024-07-13 07:20:49.802456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.431 [2024-07-13 07:20:49.802698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.431 [2024-07-13 07:20:49.802721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.431 [2024-07-13 07:20:49.802736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.431 [2024-07-13 07:20:49.806320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.431 [2024-07-13 07:20:49.815601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.431 [2024-07-13 07:20:49.816036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.431 [2024-07-13 07:20:49.816067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.431 [2024-07-13 07:20:49.816091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.431 [2024-07-13 07:20:49.816329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.431 [2024-07-13 07:20:49.816572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.431 [2024-07-13 07:20:49.816595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.431 [2024-07-13 07:20:49.816610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.431 [2024-07-13 07:20:49.820188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.431 [2024-07-13 07:20:49.829468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.431 [2024-07-13 07:20:49.829917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.431 [2024-07-13 07:20:49.829945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.431 [2024-07-13 07:20:49.829961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.431 [2024-07-13 07:20:49.830209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.431 [2024-07-13 07:20:49.830452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.431 [2024-07-13 07:20:49.830475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.431 [2024-07-13 07:20:49.830490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.431 [2024-07-13 07:20:49.834069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.431 [2024-07-13 07:20:49.843351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.431 [2024-07-13 07:20:49.843797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.431 [2024-07-13 07:20:49.843824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.431 [2024-07-13 07:20:49.843856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.431 [2024-07-13 07:20:49.844116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.431 [2024-07-13 07:20:49.844358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.431 [2024-07-13 07:20:49.844382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.431 [2024-07-13 07:20:49.844398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.431 [2024-07-13 07:20:49.847972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.431 [2024-07-13 07:20:49.857239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.431 [2024-07-13 07:20:49.857664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.431 [2024-07-13 07:20:49.857695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.431 [2024-07-13 07:20:49.857712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.431 [2024-07-13 07:20:49.857961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.431 [2024-07-13 07:20:49.858204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.431 [2024-07-13 07:20:49.858234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.431 [2024-07-13 07:20:49.858250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.431 [2024-07-13 07:20:49.861818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.431 [2024-07-13 07:20:49.871099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.431 [2024-07-13 07:20:49.871536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.431 [2024-07-13 07:20:49.871567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.431 [2024-07-13 07:20:49.871585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.431 [2024-07-13 07:20:49.871823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.431 [2024-07-13 07:20:49.872075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.431 [2024-07-13 07:20:49.872100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.431 [2024-07-13 07:20:49.872115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.431 [2024-07-13 07:20:49.875680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.729 [2024-07-13 07:20:49.885312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.729 [2024-07-13 07:20:49.885745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.729 [2024-07-13 07:20:49.885778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.729 [2024-07-13 07:20:49.885796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.729 [2024-07-13 07:20:49.886046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.729 [2024-07-13 07:20:49.886289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.729 [2024-07-13 07:20:49.886313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.729 [2024-07-13 07:20:49.886328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.729 [2024-07-13 07:20:49.890062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.729 [2024-07-13 07:20:49.899335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.729 [2024-07-13 07:20:49.899773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.729 [2024-07-13 07:20:49.899800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.729 [2024-07-13 07:20:49.899831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.729 [2024-07-13 07:20:49.900067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.729 [2024-07-13 07:20:49.900310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.729 [2024-07-13 07:20:49.900334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.729 [2024-07-13 07:20:49.900349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.729 [2024-07-13 07:20:49.903921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.729 [2024-07-13 07:20:49.913202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.729 [2024-07-13 07:20:49.913643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.729 [2024-07-13 07:20:49.913674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.729 [2024-07-13 07:20:49.913692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.729 [2024-07-13 07:20:49.913942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.729 [2024-07-13 07:20:49.914186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.729 [2024-07-13 07:20:49.914209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.729 [2024-07-13 07:20:49.914225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.729 [2024-07-13 07:20:49.917790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.729 [2024-07-13 07:20:49.927080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.729 [2024-07-13 07:20:49.927483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.729 [2024-07-13 07:20:49.927515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.729 [2024-07-13 07:20:49.927533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.729 [2024-07-13 07:20:49.927770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.729 [2024-07-13 07:20:49.928030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.729 [2024-07-13 07:20:49.928055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.729 [2024-07-13 07:20:49.928071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.729 [2024-07-13 07:20:49.931641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.729 [2024-07-13 07:20:49.940926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.729 [2024-07-13 07:20:49.941339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.729 [2024-07-13 07:20:49.941370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.729 [2024-07-13 07:20:49.941388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.729 [2024-07-13 07:20:49.941626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.729 [2024-07-13 07:20:49.941879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.729 [2024-07-13 07:20:49.941903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.729 [2024-07-13 07:20:49.941919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.729 [2024-07-13 07:20:49.945485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.729 [2024-07-13 07:20:49.954765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.729 [2024-07-13 07:20:49.955198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.729 [2024-07-13 07:20:49.955230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.729 [2024-07-13 07:20:49.955248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.729 [2024-07-13 07:20:49.955491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.729 [2024-07-13 07:20:49.955734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.729 [2024-07-13 07:20:49.955757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.729 [2024-07-13 07:20:49.955772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.729 [2024-07-13 07:20:49.959347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.729 [2024-07-13 07:20:49.968623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.729 [2024-07-13 07:20:49.969035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.729 [2024-07-13 07:20:49.969066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.729 [2024-07-13 07:20:49.969084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.729 [2024-07-13 07:20:49.969322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.729 [2024-07-13 07:20:49.969563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.729 [2024-07-13 07:20:49.969587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.729 [2024-07-13 07:20:49.969602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.729 [2024-07-13 07:20:49.973177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:49.982660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:49.983048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:49.983080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:49.983097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:49.983335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:49.983576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:49.983600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:49.983615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:49.987190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.730 [2024-07-13 07:20:49.996674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:49.997084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:49.997115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:49.997133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:49.997370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:49.997612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:49.997635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:49.997656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.001238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:50.010534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.010950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.010982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.011001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.011239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.011481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.011504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.011520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.015098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.730 [2024-07-13 07:20:50.024379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.024788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.024820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.024838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.025084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.025328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.025353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.025368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.028943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:50.038562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.039030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.039064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.039083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.039323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.039566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.039589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.039605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.043189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.730 [2024-07-13 07:20:50.052464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.052912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.052945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.052964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.053204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.053446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.053470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.053485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.057059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:50.066334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.066760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.066793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.066811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.067061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.067305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.067328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.067343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.070921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.730 [2024-07-13 07:20:50.080217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.080658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.080701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.080717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.081000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.081244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.081268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.081283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.084849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:50.094138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.094546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.094578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.094596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.094833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.095090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.095115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.095131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.098695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.730 [2024-07-13 07:20:50.107973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.108404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.108435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.108452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.108690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.108942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.108967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.108982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.112546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:50.121826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.122237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.122269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.122287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.122525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.122767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.122790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.122805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.126385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.730 [2024-07-13 07:20:50.135688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.136164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.136196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.136213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.136451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.136693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.136717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.136732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.140330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:50.149614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.150032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.150065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.150083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.150321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.150563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.150587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.150602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.154178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.730 [2024-07-13 07:20:50.163450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.163948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.163976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.163992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.164239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.164482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.164505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.164521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.168095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.730 [2024-07-13 07:20:50.177369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.730 [2024-07-13 07:20:50.177783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.730 [2024-07-13 07:20:50.177814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.730 [2024-07-13 07:20:50.177831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.730 [2024-07-13 07:20:50.178079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.730 [2024-07-13 07:20:50.178322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.730 [2024-07-13 07:20:50.178345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.730 [2024-07-13 07:20:50.178360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.730 [2024-07-13 07:20:50.182160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.989 [2024-07-13 07:20:50.191340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.191769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.191797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.191820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.192094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.192338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.192362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.192377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.195956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.989 [2024-07-13 07:20:50.205242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.205684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.205711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.205730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.205991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.206234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.206257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.206272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.209845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.989 [2024-07-13 07:20:50.219127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.219544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.219576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.219594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.219832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.220087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.220113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.220133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.223699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.989 [2024-07-13 07:20:50.233017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.233460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.233492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.233510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.233748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.234008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.234034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.234050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.237619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.989 [2024-07-13 07:20:50.246944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.247375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.247408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.247426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.247665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.247921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.247947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.247964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.251531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.989 [2024-07-13 07:20:50.260812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.261272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.261306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.261324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.261563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.261805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.261830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.261846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.265427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.989 [2024-07-13 07:20:50.274705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.275123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.275156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.275174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.275411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.275653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.275678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.275693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.279274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.989 [2024-07-13 07:20:50.288560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.288966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.288999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.289018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.289258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.289502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.289528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.289544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.293121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.989 [2024-07-13 07:20:50.302391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.302815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.302846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.302874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.303116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.303358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.303382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.303398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.306972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.989 [2024-07-13 07:20:50.316246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.316667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.316699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.316717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.316967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.317211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.317236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.317252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.320821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.989 [2024-07-13 07:20:50.330101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.330515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.330547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.330571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.330811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.331066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.331092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.331109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.334678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.989 [2024-07-13 07:20:50.343970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.344388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.344420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.344438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.344677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.344932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.344958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.344974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.348542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.989 [2024-07-13 07:20:50.357829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.358263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.358296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.358315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.358554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.358797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.989 [2024-07-13 07:20:50.358823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.989 [2024-07-13 07:20:50.358839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.989 [2024-07-13 07:20:50.362419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.989 [2024-07-13 07:20:50.371594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.989 [2024-07-13 07:20:50.372020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.989 [2024-07-13 07:20:50.372050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.989 [2024-07-13 07:20:50.372067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.989 [2024-07-13 07:20:50.372314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.989 [2024-07-13 07:20:50.372557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.990 [2024-07-13 07:20:50.372587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.990 [2024-07-13 07:20:50.372604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.990 [2024-07-13 07:20:50.376179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.990 [2024-07-13 07:20:50.385563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.990 [2024-07-13 07:20:50.386020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.990 [2024-07-13 07:20:50.386050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.990 [2024-07-13 07:20:50.386067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.990 [2024-07-13 07:20:50.386314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.990 [2024-07-13 07:20:50.386557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.990 [2024-07-13 07:20:50.386581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.990 [2024-07-13 07:20:50.386597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.990 [2024-07-13 07:20:50.390181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.990 [2024-07-13 07:20:50.399546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.990 [2024-07-13 07:20:50.399975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.990 [2024-07-13 07:20:50.400005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.990 [2024-07-13 07:20:50.400021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.990 [2024-07-13 07:20:50.400264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.990 [2024-07-13 07:20:50.400508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.990 [2024-07-13 07:20:50.400532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.990 [2024-07-13 07:20:50.400548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.990 [2024-07-13 07:20:50.404129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.990 [2024-07-13 07:20:50.413395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.990 [2024-07-13 07:20:50.413907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.990 [2024-07-13 07:20:50.413936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.990 [2024-07-13 07:20:50.413953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.990 [2024-07-13 07:20:50.414201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.990 [2024-07-13 07:20:50.414445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.990 [2024-07-13 07:20:50.414470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.990 [2024-07-13 07:20:50.414486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.990 [2024-07-13 07:20:50.418063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.990 [2024-07-13 07:20:50.427334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.990 [2024-07-13 07:20:50.427762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.990 [2024-07-13 07:20:50.427789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.990 [2024-07-13 07:20:50.427805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.990 [2024-07-13 07:20:50.428061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.990 [2024-07-13 07:20:50.428305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.990 [2024-07-13 07:20:50.428330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.990 [2024-07-13 07:20:50.428346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.990 [2024-07-13 07:20:50.431920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.990 [2024-07-13 07:20:50.441382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.990 [2024-07-13 07:20:50.441847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.990 [2024-07-13 07:20:50.441888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:20.990 [2024-07-13 07:20:50.441909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:20.990 [2024-07-13 07:20:50.442154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:20.990 [2024-07-13 07:20:50.442425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.990 [2024-07-13 07:20:50.442461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.990 [2024-07-13 07:20:50.442489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.247 [2024-07-13 07:20:50.446229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.455408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.455845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.455885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.455906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.456145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.456389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.456413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.456430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.460004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.469271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.469689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.469721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.469740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.469996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.470240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.470265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.470281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.473846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.483130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.483536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.483563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.483579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.483805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.484058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.484083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.484100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.487672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.497161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.497669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.497700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.497718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.497967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.498210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.498235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.498251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.501816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.511094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.511499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.511532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.511550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.511789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.512044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.512070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.512091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.515657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.524962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.525398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.525430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.525448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.525687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.525940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.525966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.525983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.529548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.538822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.539243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.539271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.539287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.539529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.539773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.539797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.539813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.543396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.552661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.553093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.553125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.553144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.553382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.553624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.553649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.553664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.557239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.566506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.566932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.566969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.566988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.567227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.567470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.567495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.567511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.571086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.580354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.580787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.580816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.580831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.581088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.581333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.581357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.581373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.584945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.594213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.594647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.594674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.594690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.594948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.595191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.595216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.595231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.598800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.608083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.608507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.608538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.608556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.608794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.609053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.609079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.609095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.612661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.621941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.622341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.622373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.622391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.622630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.622882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.622907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.622923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.626488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.635975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.636409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.636442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.636460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.636698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.636951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.636976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.636992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.640560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.649828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.650255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.650288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.650306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.650544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.650787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.650812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.650828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.654538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.663800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.664220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.664252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.664271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.664509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.664752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.664777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.664793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.668373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.248 [2024-07-13 07:20:50.677657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.678113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.678146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.678165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.678403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.678645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.678670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.678686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.682261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.248 [2024-07-13 07:20:50.691529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.248 [2024-07-13 07:20:50.691956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.248 [2024-07-13 07:20:50.691988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.248 [2024-07-13 07:20:50.692006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.248 [2024-07-13 07:20:50.692245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.248 [2024-07-13 07:20:50.692488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.248 [2024-07-13 07:20:50.692512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.248 [2024-07-13 07:20:50.692528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.248 [2024-07-13 07:20:50.696103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.507 [2024-07-13 07:20:50.705642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.507 [2024-07-13 07:20:50.706060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.507 [2024-07-13 07:20:50.706095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.507 [2024-07-13 07:20:50.706120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.507 [2024-07-13 07:20:50.706360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.507 [2024-07-13 07:20:50.706605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.507 [2024-07-13 07:20:50.706629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.507 [2024-07-13 07:20:50.706645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.507 [2024-07-13 07:20:50.710234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.507 [2024-07-13 07:20:50.719525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.507 [2024-07-13 07:20:50.719946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.507 [2024-07-13 07:20:50.719979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.507 [2024-07-13 07:20:50.719997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.507 [2024-07-13 07:20:50.720236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.507 [2024-07-13 07:20:50.720479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.507 [2024-07-13 07:20:50.720503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.507 [2024-07-13 07:20:50.720520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.507 [2024-07-13 07:20:50.724094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.507 [2024-07-13 07:20:50.733368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.507 [2024-07-13 07:20:50.733797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.507 [2024-07-13 07:20:50.733828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.507 [2024-07-13 07:20:50.733846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.507 [2024-07-13 07:20:50.734093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.507 [2024-07-13 07:20:50.734336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.507 [2024-07-13 07:20:50.734360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.507 [2024-07-13 07:20:50.734376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.508 [2024-07-13 07:20:50.737949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.508 [2024-07-13 07:20:50.747242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.508 [2024-07-13 07:20:50.747668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.508 [2024-07-13 07:20:50.747699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.508 [2024-07-13 07:20:50.747718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.508 [2024-07-13 07:20:50.747966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.508 [2024-07-13 07:20:50.748210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.508 [2024-07-13 07:20:50.748239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.508 [2024-07-13 07:20:50.748257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.508 [2024-07-13 07:20:50.751823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.508 - 00:33:21.770 [2024-07-13 07:20:50.761105 - 07:20:51.168040] The reset cycle shown above repeats identically, roughly every 13-14 ms: nvme_ctrlr_disconnect resets [nqn.2016-06.io.spdk:cnode1], posix_sock_create fails with connect() errno = 111 for tqpair=0x1ca3b50 (addr=10.0.0.2, port=4420), nvme_tcp_qpair_process_completions reports "Bad file descriptor", spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed", and each attempt ends with bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
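The cadence of those attempts is a plain bounded-retry loop: dial, fail, mark the controller failed, wait a few milliseconds, dial again. A compact C sketch of that shape; try_connect, the 5-attempt cap, and the 13 ms delay are illustrative stand-ins, not SPDK's actual reset path:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for the qpair dial; here it always fails the way the log shows. */
static bool try_connect(void)
{
    errno = ECONNREFUSED;
    return false;
}

int main(void)
{
    /* ~13 ms between attempts, matching the cadence of the cycles above. */
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 13 * 1000 * 1000 };

    for (int attempt = 1; attempt <= 5; attempt++) {
        if (try_connect()) {
            puts("controller reinitialized");
            return 0;
        }
        printf("attempt %d: connect failed, errno = %d; controller in failed state, retrying\n",
               attempt, errno);
        nanosleep(&delay, NULL);
    }
    puts("giving up after 5 attempts");
    return 1;
}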
00:33:21.770 [2024-07-13 07:20:51.177322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.770 [2024-07-13 07:20:51.177820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.770 [2024-07-13 07:20:51.177851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.770 [2024-07-13 07:20:51.177877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.770 [2024-07-13 07:20:51.178117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.770 [2024-07-13 07:20:51.178359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.770 [2024-07-13 07:20:51.178384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.770 [2024-07-13 07:20:51.178400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.770 [2024-07-13 07:20:51.181974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1667635 Killed "${NVMF_APP[@]}" "$@" 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.770 [2024-07-13 07:20:51.191244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.770 [2024-07-13 07:20:51.191738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.770 [2024-07-13 07:20:51.191770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.770 [2024-07-13 07:20:51.191788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.770 [2024-07-13 07:20:51.192035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.770 [2024-07-13 07:20:51.192278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.770 [2024-07-13 07:20:51.192302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.770 [2024-07-13 07:20:51.192318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1668592 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1668592 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1668592 ']' 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:21.770 07:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.770 [2024-07-13 07:20:51.195893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.770 [2024-07-13 07:20:51.205186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.770 [2024-07-13 07:20:51.205597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.770 [2024-07-13 07:20:51.205628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.770 [2024-07-13 07:20:51.205646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.770 [2024-07-13 07:20:51.205893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.770 [2024-07-13 07:20:51.206135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.770 [2024-07-13 07:20:51.206159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.770 [2024-07-13 07:20:51.206174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.770 [2024-07-13 07:20:51.209740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
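waitforlisten gates the test on the freshly spawned nvmf_tgt (pid 1668592) accepting connections on its RPC socket, /var/tmp/spdk.sock. Conceptually that is just polling connect() on a UNIX domain socket until it succeeds; a C sketch under that assumption, with a made-up probe interval and timeout:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* One probe: succeeds only if something is accepting on the UNIX socket path. */
static int rpc_is_listening(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    struct sockaddr_un addr = { 0 };
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    /* Poll for up to ~10 s at 100 ms per probe (assumed values). */
    for (int i = 0; i < 100; i++) {
        if (rpc_is_listening("/var/tmp/spdk.sock")) {
            puts("target is up and listening");
            return 0;
        }
        usleep(100 * 1000);
    }
    puts("timed out waiting for /var/tmp/spdk.sock");
    return 1;
}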
00:33:21.770 [2024-07-13 07:20:51.219130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.770 [2024-07-13 07:20:51.219567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.770 [2024-07-13 07:20:51.219610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:21.770 [2024-07-13 07:20:51.219643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:21.771 [2024-07-13 07:20:51.219943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:21.771 [2024-07-13 07:20:51.220196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.771 [2024-07-13 07:20:51.220221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.771 [2024-07-13 07:20:51.220237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.029 [2024-07-13 07:20:51.224007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.029 [2024-07-13 07:20:51.233037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.029 [2024-07-13 07:20:51.233458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.029 [2024-07-13 07:20:51.233492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.029 [2024-07-13 07:20:51.233511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.029 [2024-07-13 07:20:51.233751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.029 [2024-07-13 07:20:51.234005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.029 [2024-07-13 07:20:51.234030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.029 [2024-07-13 07:20:51.234046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.029 [2024-07-13 07:20:51.237620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.029 [2024-07-13 07:20:51.245443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:33:22.029 [2024-07-13 07:20:51.245514] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.029 [2024-07-13 07:20:51.246908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.029 [2024-07-13 07:20:51.247340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.029 [2024-07-13 07:20:51.247372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.029 [2024-07-13 07:20:51.247390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.247629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.247879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.247904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.247920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.251483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.030 [2024-07-13 07:20:51.260948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.030 [2024-07-13 07:20:51.261393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-13 07:20:51.261425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.030 [2024-07-13 07:20:51.261443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.261681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.261932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.261957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.261972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.265541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.030 [2024-07-13 07:20:51.274841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.030 [2024-07-13 07:20:51.275284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-13 07:20:51.275317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.030 [2024-07-13 07:20:51.275335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.275573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.275815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.275839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.275854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.279438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.030 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.030 [2024-07-13 07:20:51.288716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.030 [2024-07-13 07:20:51.289133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-13 07:20:51.289165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.030 [2024-07-13 07:20:51.289184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.289428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.289671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.289695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.289710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.293291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.030 [2024-07-13 07:20:51.295911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:22.030 [2024-07-13 07:20:51.302553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.030 [2024-07-13 07:20:51.302994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-13 07:20:51.303026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.030 [2024-07-13 07:20:51.303045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.303282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.303525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.303549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.303564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.307136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.030 [2024-07-13 07:20:51.316197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.030 [2024-07-13 07:20:51.316639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-13 07:20:51.316667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.030 [2024-07-13 07:20:51.316684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.316942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.317161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.317182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.317196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.320344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.030 [2024-07-13 07:20:51.326587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:22.030 [2024-07-13 07:20:51.330105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.030 [2024-07-13 07:20:51.330548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-13 07:20:51.330580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.030 [2024-07-13 07:20:51.330599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.330840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.331095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.331118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.331133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.334738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.030 [2024-07-13 07:20:51.343930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.030 [2024-07-13 07:20:51.344467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.030 [2024-07-13 07:20:51.344508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.030 [2024-07-13 07:20:51.344529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.030 [2024-07-13 07:20:51.344775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.030 [2024-07-13 07:20:51.345032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.030 [2024-07-13 07:20:51.345056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.030 [2024-07-13 07:20:51.345072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.030 [2024-07-13 07:20:51.348589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.031 [2024-07-13 07:20:51.419945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:22.031 [2024-07-13 07:20:51.419976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:22.031 [2024-07-13 07:20:51.420012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:22.031 [2024-07-13 07:20:51.420024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:22.031 [2024-07-13 07:20:51.420034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:22.031 [2024-07-13 07:20:51.420090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:33:22.031 [2024-07-13 07:20:51.420154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:33:22.031 [2024-07-13 07:20:51.420157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:22.553 [2024-07-13 07:20:51.982058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.553 [2024-07-13 07:20:51.982441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.553 [2024-07-13 07:20:51.982469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.553 [2024-07-13 07:20:51.982485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.553 [2024-07-13 07:20:51.982699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.553 [2024-07-13 07:20:51.982926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.553 [2024-07-13 07:20:51.982949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.553 [2024-07-13 07:20:51.982962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.553 [2024-07-13 07:20:51.986172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.553 [2024-07-13 07:20:51.995489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.553 [2024-07-13 07:20:51.995873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.553 [2024-07-13 07:20:51.995901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.553 [2024-07-13 07:20:51.995918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.553 [2024-07-13 07:20:51.996131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.553 [2024-07-13 07:20:51.996358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.553 [2024-07-13 07:20:51.996379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.553 [2024-07-13 07:20:51.996393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.553 [2024-07-13 07:20:51.999555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.812 [2024-07-13 07:20:52.009100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.812 [2024-07-13 07:20:52.009569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.812 [2024-07-13 07:20:52.009603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.812 [2024-07-13 07:20:52.009639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.812 [2024-07-13 07:20:52.009916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.812 [2024-07-13 07:20:52.010168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.812 [2024-07-13 07:20:52.010189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.812 [2024-07-13 07:20:52.010204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.812 [2024-07-13 07:20:52.013498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.812 [2024-07-13 07:20:52.022667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.812 [2024-07-13 07:20:52.023046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.812 [2024-07-13 07:20:52.023077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.812 [2024-07-13 07:20:52.023093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.812 [2024-07-13 07:20:52.023323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.812 [2024-07-13 07:20:52.023535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.812 [2024-07-13 07:20:52.023555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.812 [2024-07-13 07:20:52.023569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.812 [2024-07-13 07:20:52.026735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.812 [2024-07-13 07:20:52.036102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.812 [2024-07-13 07:20:52.036505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.812 [2024-07-13 07:20:52.036534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.812 [2024-07-13 07:20:52.036551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.812 [2024-07-13 07:20:52.036764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.812 [2024-07-13 07:20:52.037001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.812 [2024-07-13 07:20:52.037022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.812 [2024-07-13 07:20:52.037036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.812 [2024-07-13 07:20:52.040221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.812 [2024-07-13 07:20:52.049610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.812 [2024-07-13 07:20:52.050004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.812 [2024-07-13 07:20:52.050033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.812 [2024-07-13 07:20:52.050049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.812 [2024-07-13 07:20:52.050263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.812 [2024-07-13 07:20:52.050490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.812 [2024-07-13 07:20:52.050516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.812 [2024-07-13 07:20:52.050530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.812 [2024-07-13 07:20:52.053684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.812 [2024-07-13 07:20:52.063234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.812 [2024-07-13 07:20:52.063599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.812 [2024-07-13 07:20:52.063627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.812 [2024-07-13 07:20:52.063643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.812 [2024-07-13 07:20:52.063856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.812 [2024-07-13 07:20:52.064084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.812 [2024-07-13 07:20:52.064106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.812 [2024-07-13 07:20:52.064119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.812 [2024-07-13 07:20:52.067294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.812 [2024-07-13 07:20:52.076688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.812 [2024-07-13 07:20:52.077078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.812 [2024-07-13 07:20:52.077106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.812 [2024-07-13 07:20:52.077122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.812 [2024-07-13 07:20:52.077350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.812 [2024-07-13 07:20:52.077561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.812 [2024-07-13 07:20:52.077582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.812 [2024-07-13 07:20:52.077595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.812 [2024-07-13 07:20:52.080798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.812 [2024-07-13 07:20:52.090171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.812 [2024-07-13 07:20:52.090556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.812 [2024-07-13 07:20:52.090584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.090601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.090815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.091071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.091093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.091107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.094277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.813 [2024-07-13 07:20:52.103694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.104079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.104108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.104124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.104352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.104565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.104586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.104600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.107760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.813 [2024-07-13 07:20:52.117144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.117542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.117570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.117586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.117815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.118055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.118078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.118092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.121347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.813 [2024-07-13 07:20:52.130536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.130921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.130950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.130967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.131195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.131407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.131428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.131441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.134597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.813 [2024-07-13 07:20:52.144006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.144340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.144383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.144400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.144647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.144859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.144888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.144902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.148067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.813 [2024-07-13 07:20:52.157420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.157824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.157852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.157875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.158091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.158319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.158340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.158354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.161552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.813 [2024-07-13 07:20:52.170945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.171321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.171349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.171366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.171595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.171806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.171827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.171841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.175052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.813 [2024-07-13 07:20:52.184456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.184842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.184877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.184895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.185109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.185338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.185359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.185378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.188535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.813 [2024-07-13 07:20:52.197942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.198304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.198332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.198348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.198562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.198790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.198811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.198824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.202048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.813 [2024-07-13 07:20:52.211459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.211837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.211873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.211891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.212106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.212324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.212345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.212358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 [2024-07-13 07:20:52.215590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.813 [2024-07-13 07:20:52.224985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.813 [2024-07-13 07:20:52.225329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.813 [2024-07-13 07:20:52.225357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.813 [2024-07-13 07:20:52.225373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.813 [2024-07-13 07:20:52.225587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.813 [2024-07-13 07:20:52.225805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.813 [2024-07-13 07:20:52.225827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.813 [2024-07-13 07:20:52.225841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.813 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:22.813 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:22.813 07:20:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:22.813 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:22.814 [2024-07-13 07:20:52.229132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.814 [2024-07-13 07:20:52.238464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.814 [2024-07-13 07:20:52.238836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.814 [2024-07-13 07:20:52.238872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.814 [2024-07-13 07:20:52.238889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.814 [2024-07-13 07:20:52.239104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.814 [2024-07-13 07:20:52.239321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.814 [2024-07-13 07:20:52.239343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.814 [2024-07-13 07:20:52.239357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.814 [2024-07-13 07:20:52.242570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.814 [2024-07-13 07:20:52.252053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:22.814 [2024-07-13 07:20:52.252440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.814 [2024-07-13 07:20:52.252469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:22.814 [2024-07-13 07:20:52.252492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:22.814 [2024-07-13 07:20:52.252720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:22.814 [2024-07-13 07:20:52.252941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.814 [2024-07-13 07:20:52.252963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.814 [2024-07-13 07:20:52.252977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.814 [2024-07-13 07:20:52.256139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.814 [2024-07-13 07:20:52.258729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.814 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.073 [2024-07-13 07:20:52.265860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.073 [2024-07-13 07:20:52.266305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.073 [2024-07-13 07:20:52.266336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:23.073 [2024-07-13 07:20:52.266353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:23.073 [2024-07-13 07:20:52.266586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:23.073 [2024-07-13 07:20:52.266807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.073 [2024-07-13 07:20:52.266828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.073 [2024-07-13 07:20:52.266842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:23.073 [2024-07-13 07:20:52.270126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.073 [2024-07-13 07:20:52.279550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.073 [2024-07-13 07:20:52.279981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.073 [2024-07-13 07:20:52.280015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:23.073 [2024-07-13 07:20:52.280032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:23.073 [2024-07-13 07:20:52.280261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:23.073 [2024-07-13 07:20:52.280482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.073 [2024-07-13 07:20:52.280502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.073 [2024-07-13 07:20:52.280515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.073 [2024-07-13 07:20:52.283732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.073 [2024-07-13 07:20:52.293140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.073 [2024-07-13 07:20:52.293767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.073 [2024-07-13 07:20:52.293807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:23.073 [2024-07-13 07:20:52.293827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:23.073 [2024-07-13 07:20:52.294061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:23.073 [2024-07-13 07:20:52.294294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.073 [2024-07-13 07:20:52.294316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.073 [2024-07-13 07:20:52.294332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.073 [2024-07-13 07:20:52.297533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.073 Malloc0 00:33:23.073 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.073 07:20:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:23.073 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.073 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.073 [2024-07-13 07:20:52.306917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.073 [2024-07-13 07:20:52.307474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.073 [2024-07-13 07:20:52.307503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:23.074 [2024-07-13 07:20:52.307521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:23.074 [2024-07-13 07:20:52.307766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:23.074 [2024-07-13 07:20:52.308008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.074 [2024-07-13 07:20:52.308031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.074 [2024-07-13 07:20:52.308047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.074 [2024-07-13 07:20:52.311322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.074 [2024-07-13 07:20:52.320581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.074 [2024-07-13 07:20:52.321053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.074 [2024-07-13 07:20:52.321083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3b50 with addr=10.0.0.2, port=4420 00:33:23.074 [2024-07-13 07:20:52.321100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3b50 is same with the state(5) to be set 00:33:23.074 [2024-07-13 07:20:52.321329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3b50 (9): Bad file descriptor 00:33:23.074 [2024-07-13 07:20:52.321541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.074 [2024-07-13 07:20:52.321561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.074 [2024-07-13 07:20:52.321575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.074 [2024-07-13 07:20:52.322352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.074 [2024-07-13 07:20:52.324834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.074 07:20:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1667926 00:33:23.074 [2024-07-13 07:20:52.334117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.074 [2024-07-13 07:20:52.411612] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
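Untangled from the reset noise, the xtrace entries above (bdevperf.sh lines 17 through 21) are the standard target bring-up, and the final "Resetting controller successful" entry lands right after the listener goes live. rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, so the equivalent stand-alone sequence would look like this (script path illustrative; flags copied from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as traced above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # 10.0.0.2:4420 finally answers
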
00:33:33.039
00:33:33.039 Latency(us)
00:33:33.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:33.039 Verification LBA range: start 0x0 length 0x4000
00:33:33.039 Nvme1n1 : 15.01 6260.76 24.46 11004.27 0.00 7389.99 843.47 22427.88
00:33:33.039 ===================================================================================================================
00:33:33.039 Total : 6260.76 24.46 11004.27 0.00 7389.99 843.47 22427.88
00:33:33.039 07:21:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
07:21:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
07:21:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
07:21:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1668592 ']'
07:21:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1668592
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1668592 ']'
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1668592
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668592
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668592'
killing process with pid 1668592
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1668592
07:21:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1668592
07:21:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
07:21:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
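Two reads on the latency table above: the MiB/s column is consistent with the 4 KiB I/O size, since 6260.76 IOPS x 4096 bytes / 2^20 = 24.46 MiB/s, and Fail/s (11004.27) dwarfing completed IOPS fits a 15 s run spent largely inside the reconnect storms logged earlier. The arithmetic check, as a one-liner:

    awk 'BEGIN { printf "%.2f MiB/s\n", 6260.76 * 4096 / 1048576 }'   # prints 24.46 MiB/s
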
00:33:33.039 07:21:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
07:21:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
07:21:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
07:21:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:21:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
07:21:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:33.971 07:21:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:33.971
00:33:33.971 real 0m22.355s
00:33:33.971 user 1m0.805s
00:33:33.971 sys 0m4.000s
00:33:33.971 07:21:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:33.971 07:21:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:33.971 ************************************
00:33:33.971 END TEST nvmf_bdevperf
00:33:33.971 ************************************
00:33:33.971 07:21:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:33:33.971 07:21:03 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:33.971 07:21:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:33:33.971 07:21:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:33.971 07:21:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:33.971 ************************************
00:33:33.971 START TEST nvmf_target_disconnect
00:33:33.971 ************************************
00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:33.971 * Looking for test storage...
00:33:33.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:33.971 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
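The trace above also pins the host identity used for the rest of this test: nvme gen-hostnqn emits an NQN with an embedded UUID, and NVME_HOSTID is that UUID suffix. A sketch of the derivation (the exact parameter expansion inside common.sh is an assumption; the UUID contains no colons, so stripping through the last ':' recovers it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed expansion -> 5b23e107-7094-e311-b1cb-001e67a97d55
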
prepare_net_devs 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:33.972 07:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:36.496 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
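The e810/x722/mlx ID arrays assembled above feed the device-ID checks in the entries that follow; note that xtrace escapes every character of a [[ == ]] pattern, which is why 0x1017 and 0x1019 render as \0\x\1\0\1\7 and \0\x\1\0\1\9. A plain-bash equivalent of the branch this box takes for its two E810 ports (device IDs are from the trace; the messages are illustrative):

    pci_id=0x159b                                   # the 'Found 0000:0a:00.x' entries below report this ID
    if [[ $pci_id == 0x1017 || $pci_id == 0x1019 ]]; then
        echo 'Mellanox ConnectX part'
    else
        echo 'Intel E810 (0x159b): kept in pci_devs for the TCP test'
    fi
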
00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:36.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:36.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.497 07:21:05 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:36.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:36.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:36.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:33:36.497 00:33:36.497 --- 10.0.0.2 ping statistics --- 00:33:36.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.497 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:36.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:33:36.497 00:33:36.497 --- 10.0.0.1 ping statistics --- 00:33:36.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.497 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:36.497 ************************************ 00:33:36.497 START TEST nvmf_target_disconnect_tc1 00:33:36.497 ************************************ 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:36.497 
07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:36.497 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.498 [2024-07-13 07:21:05.616712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.498 [2024-07-13 07:21:05.616788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd53e0 with addr=10.0.0.2, port=4420 00:33:36.498 [2024-07-13 07:21:05.616822] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:36.498 [2024-07-13 07:21:05.616859] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:36.498 [2024-07-13 07:21:05.616882] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:36.498 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:36.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:36.498 Initializing NVMe Controllers 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:36.498 00:33:36.498 real 0m0.100s 00:33:36.498 user 0m0.040s 00:33:36.498 sys 
0m0.060s 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:36.498 ************************************ 00:33:36.498 END TEST nvmf_target_disconnect_tc1 00:33:36.498 ************************************ 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:36.498 ************************************ 00:33:36.498 START TEST nvmf_target_disconnect_tc2 00:33:36.498 ************************************ 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1671846 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1671846 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1671846 ']' 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
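The nvmf_tcp_init trace earlier in this run splits the two E810 ports between the host and a private network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays on the host as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and a ping in each direction confirms the path before any NVMe/TCP traffic flows. A minimal sketch of that wiring, using the interface names and addresses from the trace (an illustration of the steps, not the nvmf/common.sh implementation itself):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # drop any stale addresses
  ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                   # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

Running nvmf_tgt inside the namespace (the ip netns exec prefix visible above) forces the NVMe/TCP traffic across the physical NIC pair instead of loopback, so the disconnect tests below observe real socket errors.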
00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:36.498 07:21:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.498 [2024-07-13 07:21:05.724966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:36.498 [2024-07-13 07:21:05.725060] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.498 [2024-07-13 07:21:05.767700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:36.498 [2024-07-13 07:21:05.793810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:36.498 [2024-07-13 07:21:05.885483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.498 [2024-07-13 07:21:05.885541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.498 [2024-07-13 07:21:05.885569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.498 [2024-07-13 07:21:05.885580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.498 [2024-07-13 07:21:05.885590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.498 [2024-07-13 07:21:05.885898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:36.498 [2024-07-13 07:21:05.885991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:36.498 [2024-07-13 07:21:05.886096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:36.498 [2024-07-13 07:21:05.886100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.755 Malloc0 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.755 07:21:06 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.755 [2024-07-13 07:21:06.057760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.755 [2024-07-13 07:21:06.086026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1671873 00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 
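With the target up, the rpc_cmd calls above provision it over the RPC socket: a 64 MB malloc bdev with 512-byte blocks, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001) with Malloc0 attached as a namespace, and data plus discovery listeners on 10.0.0.2:4420. Run by hand with SPDK's scripts/rpc.py, the same sequence would look like this (flags copied from the rpc_cmd arguments above; a sketch, assuming the default /var/tmp/spdk.sock socket):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420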
00:33:36.755 07:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:36.755 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.676 07:21:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1671846 00:33:38.676 07:21:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O 
failed 00:33:38.676 [2024-07-13 07:21:08.109965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Read completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.676 Write completed with error (sct=0, sc=8) 00:33:38.676 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 [2024-07-13 07:21:08.110248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:38.677 [2024-07-13 07:21:08.110476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.110505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.110640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.110667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 
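This is the tc2 fault injection itself: the reconnect example is started against 10.0.0.2:4420, and after a two-second warm-up the harness SIGKILLs the target (kill -9 1671846) under live I/O. Everything queued on the now-dead controller completes with an error status, the completion path reports CQ transport error -6 (No such device or address), and every subsequent reconnect attempt fails in posix_sock_create with errno = 111, which is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 any more. Condensed, the injection pattern is the following (paths and flags taken from the trace; $nvmfpid stands for the target PID captured at startup and is an assumption of this sketch):

  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2
  kill -9 "$nvmfpid"   # target gone: in-flight I/O errors out, reconnects are refused
  sleep 2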
00:33:38.677 [2024-07-13 07:21:08.110822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.110848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.111015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.111041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.111164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.111191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.111367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.111392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.111520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.111546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.111738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.111767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.111929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.111957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.112243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.112285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.112572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.112601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.112791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.112820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 
00:33:38.677 [2024-07-13 07:21:08.112975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.113003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.113129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.113155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.113332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.113357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.113513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.113539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.113699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.113741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.113931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.113958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.114078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.114104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.114291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.114317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.114441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.114467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 00:33:38.677 [2024-07-13 07:21:08.114639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.677 [2024-07-13 07:21:08.114665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.677 qpair failed and we were unable to recover it. 
00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Write completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 Read completed with error (sct=0, sc=8) 00:33:38.677 starting I/O failed 00:33:38.677 [2024-07-13 07:21:08.115017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.678 [2024-07-13 07:21:08.115147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.115198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 
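Note that the retries cycle through more than one allocation: the tqpair pointers 0x7f441c000b90, 0x18c2450, and 0x7f442c000b90 appear to be distinct qpair objects from successive reconnect attempts, each failing the same way. When triaging a run like this from a saved console log, a few greps give a quick first-pass summary (build.log is a hypothetical name for the captured output):

  grep -c 'qpair failed and we were unable to recover it' build.log   # total failed attempts
  grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c             # distinct qpair objects
  grep -c 'errno = 111' build.log                                     # refused TCP connects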
00:33:38.678 [2024-07-13 07:21:08.115357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.115384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.115511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.115541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.115692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.115718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.115877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.115903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.116029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.116053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.116198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.116224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.116394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.116419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.116607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.116635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.116762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.116789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.116952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.116980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 
00:33:38.678 [2024-07-13 07:21:08.117094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.117121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.117251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.117277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.117408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.117433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.117591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.117617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.117759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.117797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.117968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.117996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.118130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.118157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.118378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.118425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.118564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.118590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.118740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.118766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 
00:33:38.678 [2024-07-13 07:21:08.118942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.118971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.119126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.119152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.119331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.119357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.119478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.119504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.119730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.119756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.119907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.119934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.120086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.120119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.120277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.120302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.120453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.120479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 00:33:38.678 [2024-07-13 07:21:08.120665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.678 [2024-07-13 07:21:08.120690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.678 qpair failed and we were unable to recover it. 
00:33:38.678 [2024-07-13 07:21:08.120835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.678 [2024-07-13 07:21:08.120861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:38.678 qpair failed and we were unable to recover it.
00:33:38.679 [... the same three-line connect()/qpair error repeats for tqpair=0x7f441c000b90 from 07:21:08.121000 through 07:21:08.129380 ...]
00:33:38.957 [2024-07-13 07:21:08.129528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.957 [2024-07-13 07:21:08.129566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:38.957 qpair failed and we were unable to recover it.
00:33:38.959 [... the same error continues, alternating between tqpair=0x7f441c000b90 and tqpair=0x7f442c000b90, through 07:21:08.160252 ...]
00:33:38.961 [2024-07-13 07:21:08.160476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.961 [2024-07-13 07:21:08.160517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:38.961 qpair failed and we were unable to recover it.
00:33:38.961 [2024-07-13 07:21:08.160659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.961 [2024-07-13 07:21:08.160685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.961 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.160858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.160905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.161071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.161099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.161252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.161277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.161402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.161443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.161577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.161605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.161742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.161767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.161922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.161964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.162107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.162135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.162362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.162387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 
00:33:38.962 [2024-07-13 07:21:08.162551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.162579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.162716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.162746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.162890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.162916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.163067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.163110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.163394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.163451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.163624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.163649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.163882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.163911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.164108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.164133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.164302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.164327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.164465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.164494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 
00:33:38.962 [2024-07-13 07:21:08.164628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.164656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.164854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.164884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.165045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.165073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.165240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.165289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.165462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.165487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.165718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.165745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.165905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.165938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.166082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.166108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.962 qpair failed and we were unable to recover it. 00:33:38.962 [2024-07-13 07:21:08.166231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.962 [2024-07-13 07:21:08.166258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.166480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.166531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 
00:33:38.963 [2024-07-13 07:21:08.166665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.166693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.166859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.166913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.167054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.167080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.167229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.167254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.167395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.167423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.167587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.167615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.167789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.167814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.167967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.167993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.168116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.168142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.168364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.168389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 
00:33:38.963 [2024-07-13 07:21:08.168565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.168593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.168735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.168762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.168931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.168957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.169119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.169147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.169285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.169313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.169453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.169479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.169656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.169699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.169890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.169919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.170058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.170085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.170258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.170298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 
00:33:38.963 [2024-07-13 07:21:08.170542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.170595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.170772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.170797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.170922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.170965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.171123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.171167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.171318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.171347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.171542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.171572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.171728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.171756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.171924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.171951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.172143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.172171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.172330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.172359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 
00:33:38.963 [2024-07-13 07:21:08.172504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.172530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.172717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.172746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.172886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.172917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.173109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.173135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.173333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.173362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.173665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.173716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.173892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.173923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.174120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.174149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.174323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.963 [2024-07-13 07:21:08.174353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.963 qpair failed and we were unable to recover it. 00:33:38.963 [2024-07-13 07:21:08.174553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.174579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 
00:33:38.964 [2024-07-13 07:21:08.174720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.174749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.174941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.174970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.175134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.175160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.175326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.175355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.175511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.175540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.175732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.175761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.175968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.175995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.176135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.176178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.176345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.176372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.176494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.176520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 
00:33:38.964 [2024-07-13 07:21:08.176676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.176702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.176851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.176884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.177057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.177083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.177256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.177284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.177455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.177481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.177626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.177653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.177830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.177861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.178063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.178089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.178205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.178249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.178406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.178433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 
00:33:38.964 [2024-07-13 07:21:08.178622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.178647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.178798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.178823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.178979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.179025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.179182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.179212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.179362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.179387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.179524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.179549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.179699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.179724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.179864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.179908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.180067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.180095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.180292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.180317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 
00:33:38.964 [2024-07-13 07:21:08.180510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.180538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.180676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.180706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.180884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.180910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.181033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.181058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.181193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.181222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.181390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.181415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.181567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.181592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.181749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.181775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.181921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.181948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 00:33:38.964 [2024-07-13 07:21:08.182066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.964 [2024-07-13 07:21:08.182091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.964 qpair failed and we were unable to recover it. 
00:33:38.965 [2024-07-13 07:21:08.182239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.182264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.182413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.182438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.182583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.182609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.182752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.182795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.182973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.182999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.183122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.183165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.183350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.183376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.183534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.183560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.183681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.183706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.183825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.183850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 
00:33:38.965 [2024-07-13 07:21:08.183973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.183999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.184148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.184173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.184378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.184406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.184577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.184603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.184720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.184762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.184924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.184955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.185122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.185146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.185309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.185337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.185563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.185614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 00:33:38.965 [2024-07-13 07:21:08.185786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.965 [2024-07-13 07:21:08.185812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.965 qpair failed and we were unable to recover it. 
00:33:38.965 [2024-07-13 07:21:08.185933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.965 [2024-07-13 07:21:08.185977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:38.965 qpair failed and we were unable to recover it.
...
00:33:38.970 [2024-07-13 07:21:08.225749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.970 [2024-07-13 07:21:08.225773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:38.970 qpair failed and we were unable to recover it.
00:33:38.970 [2024-07-13 07:21:08.225966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.970 [2024-07-13 07:21:08.225995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.970 qpair failed and we were unable to recover it. 00:33:38.970 [2024-07-13 07:21:08.226153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.970 [2024-07-13 07:21:08.226181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.970 qpair failed and we were unable to recover it. 00:33:38.970 [2024-07-13 07:21:08.226351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.226377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.226567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.226595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.226752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.226780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.226926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.226952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.227102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.227128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.227282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.227325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.227473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.227499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.227676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.227719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 
00:33:38.971 [2024-07-13 07:21:08.227889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.227917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.228055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.228080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.228232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.228274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.228404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.228431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.228605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.228630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.228780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.228805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.228951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.228976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.229097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.229123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.229264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.229307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.229464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.229492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 
00:33:38.971 [2024-07-13 07:21:08.229658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.229686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.229875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.229921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.230037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.230064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.230180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.230205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.230349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.230375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.230513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.230541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.230717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.230742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.230891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.230917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.231058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.231086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.231252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.231277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 
00:33:38.971 [2024-07-13 07:21:08.231465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.231493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.231686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.231714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.231859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.231890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.232018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.232064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.232228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.232256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.232420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.232445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.232639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.232667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.232844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.232882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.233034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.233059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.233252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.233279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 
00:33:38.971 [2024-07-13 07:21:08.233468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.971 [2024-07-13 07:21:08.233496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.971 qpair failed and we were unable to recover it. 00:33:38.971 [2024-07-13 07:21:08.233640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.233665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.233818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.233843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.234020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.234045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.234198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.234223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.234393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.234421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.234579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.234606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.234751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.234776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.234898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.234924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.235068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.235095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 
00:33:38.972 [2024-07-13 07:21:08.235263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.235289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.235448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.235476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.235645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.235671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.235820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.235846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.235977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.236003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.236128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.236155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.236311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.236336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.236529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.236556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.236714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.236742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.236896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.236922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 
00:33:38.972 [2024-07-13 07:21:08.237070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.237095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.237265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.237293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.237483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.237508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.237669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.237696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.237857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.237891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.238063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.238088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.238292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.238320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.238481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.238509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.238669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.238694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.238839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.238889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 
00:33:38.972 [2024-07-13 07:21:08.239084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.239109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.239258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.239284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.239450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.239478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.239641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.239673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.239835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.239860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.239989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.240030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.240204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.240230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.240410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.240435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.240595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.240622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.240776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.240804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 
00:33:38.972 [2024-07-13 07:21:08.240948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.240975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.972 [2024-07-13 07:21:08.241155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-13 07:21:08.241198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.972 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.241366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.241394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.241536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.241561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.241704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.241728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.241876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.241902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.242096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.242121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.242261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.242288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.242457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.242483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.242628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.242653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 
00:33:38.973 [2024-07-13 07:21:08.242839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.242872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.243010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.243035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.243183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.243208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.243369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.243396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.243557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.243586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.243742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.243767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.243907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.243933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.244074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.244101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.244233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.244257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.244427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.244452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 
00:33:38.973 [2024-07-13 07:21:08.244602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.244642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.244772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.244797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.244948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.244973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.245115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.245143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.245332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.245357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.245520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.245548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.245742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.245770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.245929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.245955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.246113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.246141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.246308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.246336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 
00:33:38.973 [2024-07-13 07:21:08.246502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.246527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.246654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.246696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.246860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.246893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.247065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.247094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.247245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.247287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.247416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.247444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.247635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.247660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.247861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.247894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.248025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.248052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.248199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.248224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 
00:33:38.973 [2024-07-13 07:21:08.248368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.248393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.248531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.973 [2024-07-13 07:21:08.248558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.973 qpair failed and we were unable to recover it. 00:33:38.973 [2024-07-13 07:21:08.248726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.248751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 00:33:38.974 [2024-07-13 07:21:08.248885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.248911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 00:33:38.974 [2024-07-13 07:21:08.249032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.249058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 00:33:38.974 [2024-07-13 07:21:08.249231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.249256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 00:33:38.974 [2024-07-13 07:21:08.249421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.249448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 00:33:38.974 [2024-07-13 07:21:08.249618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.249646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 00:33:38.974 [2024-07-13 07:21:08.249792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.249817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 00:33:38.974 [2024-07-13 07:21:08.249970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.974 [2024-07-13 07:21:08.249995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.974 qpair failed and we were unable to recover it. 
00:33:38.974 [2024-07-13 07:21:08.250136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.974 [2024-07-13 07:21:08.250162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:38.974 qpair failed and we were unable to recover it.
00:33:38.974 -- last error group repeated 209 more times, [2024-07-13 07:21:08.250280] through [2024-07-13 07:21:08.288988], identical apart from timestamps --
00:33:38.979 [2024-07-13 07:21:08.289179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.289211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.289374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.289402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.289594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.289619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.289779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.289807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.289960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.289989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.290148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.290176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.290340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.290365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.290513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.290537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.290685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.290726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.290911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.290939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 
00:33:38.979 [2024-07-13 07:21:08.291078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.291103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.291213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.291237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.979 [2024-07-13 07:21:08.291437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.979 [2024-07-13 07:21:08.291465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.979 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.291669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.291693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.291843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.291872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.292046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.292074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.292261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.292289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.292430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.292456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.292575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.292600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.292745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.292771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 
00:33:38.980 [2024-07-13 07:21:08.292916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.292958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.293114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.293142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.293338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.293363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.293483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.293508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.293654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.293681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.293821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.293850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.294020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.294046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.294165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.294205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.294394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.294422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.294585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.294612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 
00:33:38.980 [2024-07-13 07:21:08.294776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.294805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.294953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.294979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.295122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.295163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.295328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.295355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.295499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.295524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.295672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.295715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.295886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.295914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.296116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.296141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.296259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.296284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.296433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.296459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 
00:33:38.980 [2024-07-13 07:21:08.296647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.296679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.296812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.296840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.297010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.297035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.297153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.297196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.297348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.297375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.297504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.297531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.297697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.297721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.297892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.297921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.298064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.298092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.298229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.298258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 
00:33:38.980 [2024-07-13 07:21:08.298403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.298428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.298618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.298646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.298798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.298825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.980 qpair failed and we were unable to recover it. 00:33:38.980 [2024-07-13 07:21:08.298993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.980 [2024-07-13 07:21:08.299021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.299190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.299215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.299338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.299380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.299583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.299608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.299751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.299776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.299926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.299952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.300155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.300183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 
00:33:38.981 [2024-07-13 07:21:08.300349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.300374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.300523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.300549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.300661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.300686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.300808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.300834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.300962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.300987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.301133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.301159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.301311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.301337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.301487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.301512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.301659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.301702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.301899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.301925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 
00:33:38.981 [2024-07-13 07:21:08.302049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.302074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.302214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.302255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.302412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.302439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.302570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.302598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.302750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.302778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.302976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.303001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.303120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.303161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.303349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.303377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.303546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.303570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.303721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.303746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 
00:33:38.981 [2024-07-13 07:21:08.303892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.303940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.304115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.304141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.304287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.304312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.304456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.304481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.304680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.304708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.304846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.304884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.305030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.305055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.305245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.305273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.305433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.305462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.305625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.305654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 
00:33:38.981 [2024-07-13 07:21:08.305800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.305825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.305984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.306009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.306158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.306183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.306335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.981 [2024-07-13 07:21:08.306377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.981 qpair failed and we were unable to recover it. 00:33:38.981 [2024-07-13 07:21:08.306544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.306570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.306762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.306790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.306922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.306951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.307122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.307150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.307324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.307349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.307472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.307499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 
00:33:38.982 [2024-07-13 07:21:08.307674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.307702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.307893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.307918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.308066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.308091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.308236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.308261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.308400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.308425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.308622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.308649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.308804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.308832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.309013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.309039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.309183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.309207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.309351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.309379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 
00:33:38.982 [2024-07-13 07:21:08.309520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.309547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.309699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.309741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.309940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.309966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.310126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.310151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.310298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.310323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.310492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.310519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.310676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.310704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.310827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.310856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.311054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.311079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.311244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.311272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 
00:33:38.982 [2024-07-13 07:21:08.311436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.311468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.311655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.311683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.311856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.311897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.312039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.312067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.312208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.312236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.312408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.312436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.312603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.312628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.312754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.312797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.312929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.312957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 00:33:38.982 [2024-07-13 07:21:08.313093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.313121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it. 
00:33:38.982 [2024-07-13 07:21:08.313310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.982 [2024-07-13 07:21:08.313335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.982 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats ~159 more times for tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420, timestamps 07:21:08.313505 through 07:21:08.343466 ...]
[... two further identical failures for tqpair=0x7f442c000b90 (07:21:08.343675, 07:21:08.343815) ...]
00:33:38.987 [2024-07-13 07:21:08.343956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d0480 is same with the state(5) to be set
[... the connect()/qpair-failure triplet then repeats ~7 times for a new tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420, 07:21:08.344123 through 07:21:08.345229 ...]
[... ~26 further repetitions for tqpair=0x7f441c000b90 (07:21:08.345409 through 07:21:08.350964), then ~13 more for tqpair=0x7f442c000b90 (07:21:08.351156 through 07:21:08.353420), ending with: ...]
00:33:38.988 [2024-07-13 07:21:08.353622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.353648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it.
00:33:38.988 [2024-07-13 07:21:08.353766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.353792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.353949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.353975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.354092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.354117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.354269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.354296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.354507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.354558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.354732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.354757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.354920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.354946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.355071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.355097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.355253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.355278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.355447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.355475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 
00:33:38.988 [2024-07-13 07:21:08.355642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.355667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.355791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.355816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.355986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.356012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.356138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.356182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.356332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.356357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.356502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.356527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.356651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.356676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.356823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.356851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.357006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.357031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.357175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.357200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 
00:33:38.988 [2024-07-13 07:21:08.357414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.357439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.357615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.357644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.357835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.357882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.358015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.358043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.358165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.358207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.358386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.358411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.358532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.988 [2024-07-13 07:21:08.358558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.988 qpair failed and we were unable to recover it. 00:33:38.988 [2024-07-13 07:21:08.358712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.358756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.358898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.358941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.359063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.359089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 
00:33:38.989 [2024-07-13 07:21:08.359255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.359284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.359482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.359509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.359658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.359684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.359824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.359854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.360036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.360067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.360229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.360254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.360431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.360456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.360608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.360638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.360877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.360922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.361047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.361075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 
00:33:38.989 [2024-07-13 07:21:08.361207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.361236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.361395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.361420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.361544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.361569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.361717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.361742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.361893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.361919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.362041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.362068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.362199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.362224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.362349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.362376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.362501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.362527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.362647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.362672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 
00:33:38.989 [2024-07-13 07:21:08.362817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.362842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.362974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.362999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.363124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.363152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.363301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.363328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.363462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.363489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.363639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.363665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.363787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.363812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.363943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.363970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.364091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.364117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.364271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.364296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 
00:33:38.989 [2024-07-13 07:21:08.364446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.364471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.364600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.364631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.364754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.364779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.364915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.364940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.365057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.365083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.365210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.365236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.365388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.365413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.989 [2024-07-13 07:21:08.365562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.989 [2024-07-13 07:21:08.365589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.989 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.365737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.365766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.365921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.365952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 
00:33:38.990 [2024-07-13 07:21:08.366074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.366100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.366246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.366272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.366444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.366469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.366592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.366618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.366739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.366769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.366920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.366946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.367081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.367107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.367261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.367287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.367399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.367425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.367545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.367571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 
00:33:38.990 [2024-07-13 07:21:08.367716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.367743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.367927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.367953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.368080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.368105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.368231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.368258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.368381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.368406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.368558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.368589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.368761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.368786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.368919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.368945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.369072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.369097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.369222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.369246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 
00:33:38.990 [2024-07-13 07:21:08.369388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.369413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.369539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.369567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.369711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.369736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.369858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.369898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.370033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.370061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.370215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.370241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.370397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.370423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.370604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.370631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.370783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.370808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.370954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.370984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 
00:33:38.990 [2024-07-13 07:21:08.371115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.371144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.371328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.371353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.371471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.371496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.371614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.371639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.371788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.371816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.372000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.372026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.372144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.372169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.372287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.372312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.990 qpair failed and we were unable to recover it. 00:33:38.990 [2024-07-13 07:21:08.372437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.990 [2024-07-13 07:21:08.372463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.372588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.372613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 
00:33:38.991 [2024-07-13 07:21:08.372764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.372790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.372953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.372979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.373102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.373127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.373248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.373273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.373444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.373473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.373597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.373622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.373735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.373760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.373921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.373947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.374103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.374134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.374292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.374319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 
00:33:38.991 [2024-07-13 07:21:08.374447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.374473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.374595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.374621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.374765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.374791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.374916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.374942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.375096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.375122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.375303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.375329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.375452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.375478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.375656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.375683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.375809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.375835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 00:33:38.991 [2024-07-13 07:21:08.375968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.991 [2024-07-13 07:21:08.375993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:38.991 qpair failed and we were unable to recover it. 
00:33:38.991 [2024-07-13 07:21:08.376145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.991 [2024-07-13 07:21:08.376171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:38.991 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed with errno = 111, sock connection error, "qpair failed and we were unable to recover it.") repeats ~90 times between 07:21:08.376285 and 07:21:08.390977, alternating between tqpair=0x7f442c000b90 and tqpair=0x7f441c000b90, always with addr=10.0.0.2, port=4420 ...]
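Every entry in this burst is the same failure: the initiator's posix sock layer calls connect() toward the target at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) and gets errno 111, which on Linux is ECONNREFUSED, the error a TCP connect receives when the host is reachable but nothing is listening on the destination port, i.e. the target's listener is (deliberately, in this test) down. A minimal standalone sketch that reproduces the errno under that condition; the address and port are taken from the log purely for illustration:

```c
/* Minimal sketch: a blocking TCP connect() to a reachable host with no
 * listener on the port fails with errno 111 (ECONNREFUSED) on Linux,
 * matching the posix_sock_create errors in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on the port this prints: errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```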
00:33:38.993 [2024-07-13 07:21:08.391157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.993 [2024-07-13 07:21:08.391182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:38.993 qpair failed and we were unable to recover it.
[... 32 outstanding commands (23 reads, 9 writes) then complete, each logged as "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:33:38.994 [2024-07-13 07:21:08.391510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:38.994 [2024-07-13 07:21:08.391720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.994 [2024-07-13 07:21:08.391757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:38.994 qpair failed and we were unable to recover it.
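The character of the errors changes at this point: once the live qpair drops, all outstanding commands complete with (sct=0, sc=8). Read against the NVMe base specification, status code type 0 is the generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", so these 32 reads and writes were not serviced but aborted when the submission queue behind the failed qpair was torn down; the follow-up "CQ transport error -6" is -ENXIO (No such device or address) reported by the completion path for qpair id 2. A small decoder for the pair as the log prints it (an illustrative helper, not an SPDK API):

```c
/* Sketch: decode the (sct, sc) pair printed on each aborted I/O above.
 * Strings follow the NVMe base spec's generic command status set;
 * this is an illustrative helper, not part of SPDK. */
#include <stdio.h>

static const char *generic_sc_str(unsigned sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic command status";
    }
}

static void decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0) { /* SCT 0: generic command status */
        printf("sct=%u, sc=%u: %s\n", sct, sc, generic_sc_str(sc));
    } else {
        printf("sct=%u, sc=%u: non-generic status code type\n", sct, sc);
    }
}

int main(void)
{
    decode_status(0, 8); /* the pair on every "starting I/O failed" line */
    return 0;
}
```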
00:33:38.994 [2024-07-13 07:21:08.391891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:38.994 [2024-07-13 07:21:08.391919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:38.994 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats ~110 more times between 07:21:08.392049 and 07:21:08.412716, cycling through tqpair=0x7f441c000b90, 0x7f442c000b90 and (once, at 07:21:08.407725) 0x7f4424000b90, always with addr=10.0.0.2, port=4420; the build-log wall clock advances from 00:33:38.994 to 00:33:39.291 across the burst ...]
00:33:39.291 [2024-07-13 07:21:08.412837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.412880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.413071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.413097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.413295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.413348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.413491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.413526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.413688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.413715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.413889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.413915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.414058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.414083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.414202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.414227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.414447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.414492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.414624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.414652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 
00:33:39.291 [2024-07-13 07:21:08.414800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.414827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.415011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.415036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.415184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.415209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.415327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.415352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.415525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.415567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.415737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.415764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.415934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.415974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.416132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.416159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.416286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.416313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.416457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.416484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 
00:33:39.291 [2024-07-13 07:21:08.416692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.416719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.416880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.416907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.417032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.417058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.417204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.417230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.417363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.417389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.417573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.417600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.417769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.417794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.417943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.417968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.418087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.418111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 00:33:39.291 [2024-07-13 07:21:08.418272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.291 [2024-07-13 07:21:08.418298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.291 qpair failed and we were unable to recover it. 
00:33:39.292 [2024-07-13 07:21:08.418448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.418474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.418651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.418679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.418851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.418893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.419029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.419055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.419179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.419205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.419361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.419387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.419539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.419568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.419771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.419796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.419937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.419962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.420111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.420136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 
00:33:39.292 [2024-07-13 07:21:08.420317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.420348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.420526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.420578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.420770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.420802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.420963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.420989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.421138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.421163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.421373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.421423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.421567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.421595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.421753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.421780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.421972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.421998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.422119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.422143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 
00:33:39.292 [2024-07-13 07:21:08.422280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.422304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.422460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.422488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.422627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.422654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.422831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.422856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.423017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.423043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.423163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.423187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.423324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.423348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.423489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.423516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.423681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.423707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.423846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.423878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 
00:33:39.292 [2024-07-13 07:21:08.424020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.424045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.424190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.424214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.292 [2024-07-13 07:21:08.424341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.292 [2024-07-13 07:21:08.424365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.292 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.424544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.424572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.424703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.424732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.424890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.424914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.425043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.425069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.425210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.425234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.425380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.425404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.425576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.425632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 
00:33:39.293 [2024-07-13 07:21:08.425768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.425796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.425941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.425969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.426121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.426149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.426316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.426361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.426562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.426609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.426765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.426799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.426951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.426978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.427110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.427137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.427347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.427377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.427533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.427577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 
00:33:39.293 [2024-07-13 07:21:08.427728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.427758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.427888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.427915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.428049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.428089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.428257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.428285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.428432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.428477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.428646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.428674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.428801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.428827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.428961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.428991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.429123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.429149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.429308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.429334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 
00:33:39.293 [2024-07-13 07:21:08.429488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.429516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.429651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.429677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.429807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.429833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.429979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.430009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.430183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.430230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.430376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.430403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.430526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.430552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.430695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.430732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.430949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.430977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.431127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.431152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 
00:33:39.293 [2024-07-13 07:21:08.431295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.431321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.431504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.431529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.431691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.293 [2024-07-13 07:21:08.431716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.293 qpair failed and we were unable to recover it. 00:33:39.293 [2024-07-13 07:21:08.431939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.431966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.432101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.432127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.432308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.432336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.432504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.432554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.432740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.432764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.432921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.432946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.433065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.433096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 
00:33:39.294 [2024-07-13 07:21:08.433232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.433257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.433399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.433426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.433659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.433709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.433891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.433917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.434071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.434096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.434239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.434266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.434507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.434558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.434735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.434762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.434944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.434970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.435121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.435145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 
00:33:39.294 [2024-07-13 07:21:08.435284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.435308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.435478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.435506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.435669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.435697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.435948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.435975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.436097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.436121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.436293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.436318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.436467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.436492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.436648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.436673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.436833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.436861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.437039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.437064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 
00:33:39.294 [2024-07-13 07:21:08.437184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.437209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.437355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.437380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.437522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.437547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.437686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.437712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.437852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.437890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.438030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.438056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.438183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.438212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.438362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.438387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.438535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.438577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.438794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.438822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 
00:33:39.294 [2024-07-13 07:21:08.438977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.439002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.439126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.439151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.439288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.439317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.439477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.294 [2024-07-13 07:21:08.439504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.294 qpair failed and we were unable to recover it. 00:33:39.294 [2024-07-13 07:21:08.439725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.439752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.439931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.439957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.440108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.440135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.440305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.440333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.440468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.440495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.440661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.440689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 
00:33:39.295 [2024-07-13 07:21:08.440831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.440859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.441006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.441031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.441186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.441211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.441368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.441411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.441593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.441638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.441812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.441837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.442009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.442049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.442214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.442241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.442371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.442396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.442563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.442590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 
00:33:39.295 [2024-07-13 07:21:08.442737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.442765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.442978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.443004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.443127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.443153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.443293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.443323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.443449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.443473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.443592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.443618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.443800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.443825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.443963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.443989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.444105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.444130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.444279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.444306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 
00:33:39.295 [2024-07-13 07:21:08.444463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.444490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.444645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.444674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.444850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.444891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.445009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.445034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.445158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.445183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.445328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.445356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.445514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.445541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.445705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.445732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.445903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.445929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.446045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.446070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 
00:33:39.295 [2024-07-13 07:21:08.446189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.446214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.446408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.446436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.446559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.446587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.446721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.446749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.446892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.446917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.295 qpair failed and we were unable to recover it. 00:33:39.295 [2024-07-13 07:21:08.447068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.295 [2024-07-13 07:21:08.447093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.447254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.447279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.447422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.447447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.447645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.447672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.447811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.447837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 
00:33:39.296 [2024-07-13 07:21:08.447983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.448010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.448135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.448159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.448331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.448355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.448480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.448506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.448655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.448682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.448835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.448862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.449034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.449059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.449188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.449213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.449357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.449382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.449534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.449558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 
00:33:39.296 [2024-07-13 07:21:08.449758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.449786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.449957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.449996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.450143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.450187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.450324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.450351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.450611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.450667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.450812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.450838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.450969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.450994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.451149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.451174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.451368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.451416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.451598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.451640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 
00:33:39.296 [2024-07-13 07:21:08.451807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.451835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.451985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.452010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.452160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.452184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.452308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.452333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.452475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.452503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.452635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.452663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.452848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.452922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.453058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.453086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.453271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.453302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.453434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.453462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 
00:33:39.296 [2024-07-13 07:21:08.453623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.453651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.453816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.453844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.453998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.454025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.454184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.454209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.454334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.454360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.454488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.454513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.296 [2024-07-13 07:21:08.454692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.296 [2024-07-13 07:21:08.454720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.296 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.454872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.454897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.455044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.455068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.455210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.455238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 
00:33:39.297 [2024-07-13 07:21:08.455369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.455396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.455558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.455586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.455735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.455763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.455910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.455936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.456063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.456088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.456222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.456258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.456383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.456408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.456535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.456560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.456707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.456734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.456929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.456955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 
00:33:39.297 [2024-07-13 07:21:08.457080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.457106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.457253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.457278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.457408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.457433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.457584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.457609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.457792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.457816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.457960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.457985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.458108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.458134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.458313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.458341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.458479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.458508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.458643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.458671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 
00:33:39.297 [2024-07-13 07:21:08.458801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.458829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.459007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.459046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.459176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.459221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.459377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.459405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.459554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.459580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.459719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.459744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.459899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.459926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.460047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.460073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.460203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.460242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.460367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.460394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 
00:33:39.297 [2024-07-13 07:21:08.460545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.460572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.297 [2024-07-13 07:21:08.460723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.297 [2024-07-13 07:21:08.460749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.297 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.460883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.460916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.461042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.461068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.461191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.461216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.461354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.461378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.461529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.461571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.461716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.461747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.461894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.461921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.462072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.462098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 
00:33:39.298 [2024-07-13 07:21:08.462227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.462254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.462388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.462414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.462596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.462622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.462771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.462797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.462927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.462954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.463079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.463107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.463254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.463280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.463419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.463445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.463581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.463620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.463816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.463859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 
00:33:39.298 [2024-07-13 07:21:08.464012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.464039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.464223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.464249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.464403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.464427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.464544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.464568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.464721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.464747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.464881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.464920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.465081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.465108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.465277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.465302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.465419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.465443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.465590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.465615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 
00:33:39.298 [2024-07-13 07:21:08.465743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.465767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.465920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.465946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.466074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.466099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.466229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.466254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.466427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.466465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.466618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.466645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.466840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.466907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.467041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.467069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.467198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.467230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.467381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.467408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 
00:33:39.298 [2024-07-13 07:21:08.467543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.467571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.467717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.467746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.298 [2024-07-13 07:21:08.467894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.298 [2024-07-13 07:21:08.467937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.298 qpair failed and we were unable to recover it. 00:33:39.299 [2024-07-13 07:21:08.468095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.299 [2024-07-13 07:21:08.468121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.299 qpair failed and we were unable to recover it. 00:33:39.299 [2024-07-13 07:21:08.468242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.299 [2024-07-13 07:21:08.468268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.299 qpair failed and we were unable to recover it. 00:33:39.299 [2024-07-13 07:21:08.468441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.299 [2024-07-13 07:21:08.468465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.299 qpair failed and we were unable to recover it. 00:33:39.299 [2024-07-13 07:21:08.468610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.299 [2024-07-13 07:21:08.468635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.299 qpair failed and we were unable to recover it. 00:33:39.299 [2024-07-13 07:21:08.468750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.299 [2024-07-13 07:21:08.468775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.299 qpair failed and we were unable to recover it. 00:33:39.299 [2024-07-13 07:21:08.468925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.299 [2024-07-13 07:21:08.468950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.299 qpair failed and we were unable to recover it. 00:33:39.299 [2024-07-13 07:21:08.469103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.299 [2024-07-13 07:21:08.469129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.299 qpair failed and we were unable to recover it. 
00:33:39.299 [2024-07-13 07:21:08.472596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.472623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.472767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.472796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.472987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.473026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.473175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.473205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.473355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.473380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.473531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.473556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.473670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.473695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.473883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.473927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.474077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.474107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.474256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.474281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.474426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.474451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.474576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.474604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.474734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.474759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.299 qpair failed and we were unable to recover it.
00:33:39.299 [2024-07-13 07:21:08.474882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.299 [2024-07-13 07:21:08.474907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.300 qpair failed and we were unable to recover it.
00:33:39.300 [2024-07-13 07:21:08.475023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.300 [2024-07-13 07:21:08.475048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.300 qpair failed and we were unable to recover it.
00:33:39.300 [2024-07-13 07:21:08.475195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.300 [2024-07-13 07:21:08.475219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.300 qpair failed and we were unable to recover it.
00:33:39.300 [2024-07-13 07:21:08.475414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.300 [2024-07-13 07:21:08.475461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.300 qpair failed and we were unable to recover it.
00:33:39.300 [2024-07-13 07:21:08.475592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.300 [2024-07-13 07:21:08.475621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.300 qpair failed and we were unable to recover it.
00:33:39.300 [2024-07-13 07:21:08.475794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.300 [2024-07-13 07:21:08.475820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.300 qpair failed and we were unable to recover it.
00:33:39.300 [2024-07-13 07:21:08.475956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.300 [2024-07-13 07:21:08.475981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.300 qpair failed and we were unable to recover it.
00:33:39.304 [2024-07-13 07:21:08.504495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.504524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.504699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.504724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.504895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.504925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.505057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.505085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.505258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.505283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.505449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.505479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.505617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.505644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.505787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.505811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.505962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.505988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.506110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.506136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 
00:33:39.304 [2024-07-13 07:21:08.506311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.506336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.506480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.506510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.506711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.506739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.506894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.506919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.507097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.507141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.507330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.507362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.507530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.507555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.507719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.507747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.507887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.507915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.508084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.508109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 
00:33:39.304 [2024-07-13 07:21:08.508302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.508330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.508489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.508517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.508705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.508733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.508893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.508936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.509060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.509084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.509227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.509251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.509374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.509399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.509603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.509631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.509773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.509799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.509982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.510026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 
00:33:39.304 [2024-07-13 07:21:08.510186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.510214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.304 [2024-07-13 07:21:08.510388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.304 [2024-07-13 07:21:08.510413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.304 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.510605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.510633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.510820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.510847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.510990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.511016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.511163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.511189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.511368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.511395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.511560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.511584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.511721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.511764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.511920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.511946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 
00:33:39.305 [2024-07-13 07:21:08.512108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.512133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.512311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.512338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.512526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.512555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.512750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.512775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.512907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.512935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.513101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.513128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.513271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.513296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.513441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.513466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.513640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.513668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.513861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.513898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 
00:33:39.305 [2024-07-13 07:21:08.514085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.514114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.514245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.514274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.514445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.514469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.514657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.514685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.514847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.514881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.515058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.515087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.515222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.515246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.515367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.515393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.515541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.515566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.515719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.515747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 
00:33:39.305 [2024-07-13 07:21:08.515933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.515958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.516102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.516127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.516245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.516286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.516443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.516472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.516647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.516672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.516873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.516901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.517091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.305 [2024-07-13 07:21:08.517120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.305 qpair failed and we were unable to recover it. 00:33:39.305 [2024-07-13 07:21:08.517293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.517318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.517489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.517517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.517699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.517727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 
00:33:39.306 [2024-07-13 07:21:08.517916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.517942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.518103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.518128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.518275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.518301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.518463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.518488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.518661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.518686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.518858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.518892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.519066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.519091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.519283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.519310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.519474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.519501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.519650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.519675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 
00:33:39.306 [2024-07-13 07:21:08.519842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.519875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.520036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.520064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.520212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.520237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.520383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.520408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.520579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.520606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.520795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.520819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.520936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.520962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.521082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.521107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.521236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.521260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.521402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.521427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 
00:33:39.306 [2024-07-13 07:21:08.521629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.521656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.521862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.521892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.522061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.522088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.522256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.522283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.522477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.522502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.522696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.522729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.522894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.522931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.523100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.523125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.523285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.523312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.523503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.523527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 
00:33:39.306 [2024-07-13 07:21:08.523677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.523702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.523860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.523910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.524711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.524745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.524929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.524956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.525123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.525153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.525315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.525342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.525494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.525519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.306 [2024-07-13 07:21:08.525657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.306 [2024-07-13 07:21:08.525699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.306 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.525830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.525857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.526167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.526197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 
00:33:39.307 [2024-07-13 07:21:08.526397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.526426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.526559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.526587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.526753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.526778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.526941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.526970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.527141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.527166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.527323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.527348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.527512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.527542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.527677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.527707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.527884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.527910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.528033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.528076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 
00:33:39.307 [2024-07-13 07:21:08.528243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.528272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.528410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.528436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.528631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.528660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.528798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.528826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.529014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.529041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.529236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.529264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.529435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.529460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.529609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.529634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.529826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.529854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 00:33:39.307 [2024-07-13 07:21:08.530030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.530055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it. 
00:33:39.307 [2024-07-13 07:21:08.530209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.307 [2024-07-13 07:21:08.530234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.307 qpair failed and we were unable to recover it.
00:33:39.313 [2024-07-13 07:21:08.571426] (the three-message sequence above repeated roughly 200 times between 07:21:08.530209 and 07:21:08.571426, cycling through tqpair=0x7f442c000b90, tqpair=0x7f4424000b90, and tqpair=0x18c2450; every connect() attempt to 10.0.0.2, port=4420 failed with errno = 111 and no qpair recovered)
00:33:39.313 [2024-07-13 07:21:08.571596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.571640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.571817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.571842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.572008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.572052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.572229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.572276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.572444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.572486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.572655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.572683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.572850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.572883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.573025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.573053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.573241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.573268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.573401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.573435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 
00:33:39.313 [2024-07-13 07:21:08.573595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.573624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.573765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.573792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.573965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.573990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.574157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.574186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.574323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.574351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.574531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.574558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.574691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.574718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.574901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.574927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.575049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.575074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.575235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.575263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 
00:33:39.313 [2024-07-13 07:21:08.575425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.575453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.575635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.575662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.575836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.575872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.576022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.576047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.576222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.576247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.576382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.576409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.576561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.576586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.576767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.576794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.576945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.576970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.577091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.577133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 
00:33:39.313 [2024-07-13 07:21:08.577294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.577322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.313 qpair failed and we were unable to recover it. 00:33:39.313 [2024-07-13 07:21:08.577455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.313 [2024-07-13 07:21:08.577482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.577616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.577658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.577792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.577817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.577979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.578004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.578119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.578161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.578328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.578361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.578525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.578553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.578718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.578745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.578871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.578900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 
00:33:39.314 [2024-07-13 07:21:08.579036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.579061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.579209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.579236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.579423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.579450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.579611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.579638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.579801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.579829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.579983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.580008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.580200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.580228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.580402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.580430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.580589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.580646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.580821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.580846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 
00:33:39.314 [2024-07-13 07:21:08.580981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.581006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.581149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.581174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.581321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.581345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.581519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.581546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.581709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.581737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.581908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.581951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.582075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.582101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.582286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.582311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.582444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.582471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.582605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.582633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 
00:33:39.314 [2024-07-13 07:21:08.582796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.582823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.583001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.583026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.583153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.583196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.583394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.583423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.583623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.583651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.583788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.583817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.584013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.584038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.584178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.584206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.584341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.584369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.584534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.584562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 
00:33:39.314 [2024-07-13 07:21:08.584729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.584757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.584890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.314 [2024-07-13 07:21:08.584933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.314 qpair failed and we were unable to recover it. 00:33:39.314 [2024-07-13 07:21:08.585054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.585079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.585190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.585216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.585332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.585373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.585563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.585591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.585750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.585778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.585977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.586003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.586128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.586153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.586317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.586346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 
00:33:39.315 [2024-07-13 07:21:08.586476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.586505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.586661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.586690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.586813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.586841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.586983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.587009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.587152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.587177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.587300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.587325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.587499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.587524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.587662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.587687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.587887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.587915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.588072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.588100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 
00:33:39.315 [2024-07-13 07:21:08.588264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.588289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.588449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.588477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.588639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.588667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.588832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.588857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.588986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.589028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.589196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.589224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.589388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.589414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.589577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.589605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.589768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.589796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.589974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.590000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 
00:33:39.315 [2024-07-13 07:21:08.590121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.590147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.590316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.590343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.590508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.590533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.590652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.590677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.590826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.590855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.591018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.591044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.591191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.591233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.591424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.591451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.591622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.591647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.591775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.591799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 
00:33:39.315 [2024-07-13 07:21:08.591922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.591948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.592093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.592118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.315 [2024-07-13 07:21:08.592236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.315 [2024-07-13 07:21:08.592261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.315 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.592398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.592424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.592609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.592634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.592754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.592795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.592955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.592981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.593104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.593129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.593327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.593354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.593552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.593577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 
00:33:39.316 [2024-07-13 07:21:08.593761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.593785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.593954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.593982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.594131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.594156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.594306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.594330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.594449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.594473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.594627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.594651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.594886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.594910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.595058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.595098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.595265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.595294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 00:33:39.316 [2024-07-13 07:21:08.595458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.595482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it. 
00:33:39.316 [2024-07-13 07:21:08.595605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.316 [2024-07-13 07:21:08.595629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.316 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1038:posix_sock_create: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error), each ending with "qpair failed and we were unable to recover it.", repeats continuously from 07:21:08.595 through 07:21:08.631, always against addr=10.0.0.2, port=4420; most entries report tqpair=0x18c2450, with intermittent entries for tqpair=0x7f4424000b90 ...]
00:33:39.321 [2024-07-13 07:21:08.631513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.631560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.321 qpair failed and we were unable to recover it.
00:33:39.321 [2024-07-13 07:21:08.631705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.631733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.321 qpair failed and we were unable to recover it. 00:33:39.321 [2024-07-13 07:21:08.631878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.631922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.321 qpair failed and we were unable to recover it. 00:33:39.321 [2024-07-13 07:21:08.632041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.632066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.321 qpair failed and we were unable to recover it. 00:33:39.321 [2024-07-13 07:21:08.632181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.632207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.321 qpair failed and we were unable to recover it. 00:33:39.321 [2024-07-13 07:21:08.632346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.632374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.321 qpair failed and we were unable to recover it. 00:33:39.321 [2024-07-13 07:21:08.632531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.632559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.321 qpair failed and we were unable to recover it. 00:33:39.321 [2024-07-13 07:21:08.632724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.321 [2024-07-13 07:21:08.632749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.632873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.632899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.633032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.633058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.633173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.633214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 
00:33:39.322 [2024-07-13 07:21:08.633376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.633404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.633558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.633589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.633749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.633777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.633959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.633997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.634118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.634144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.634286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.634314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.634444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.634472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.634689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.634717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.634884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.634927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.635052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.635078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 
00:33:39.322 [2024-07-13 07:21:08.635228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.635257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.635424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.635452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.635607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.635650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.635792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.635816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.635957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.635983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.636134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.636160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.636329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.636381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.636513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.636541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.636718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.636743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.636904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.636947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 
00:33:39.322 [2024-07-13 07:21:08.637068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.637093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.637247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.637274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.637405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.637433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.637594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.637622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.637787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.637814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.637967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.637998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.638141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.638168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.638346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.638374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.638557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.638586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.638741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.638769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 
00:33:39.322 [2024-07-13 07:21:08.638934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.638961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.639134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.639162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.639303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.639331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.639487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.639515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.639675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.639703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.639873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.639899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.640015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.640041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.322 [2024-07-13 07:21:08.640186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.322 [2024-07-13 07:21:08.640214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.322 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.640368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.640396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.640531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.640560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 
00:33:39.323 [2024-07-13 07:21:08.640709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.640741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.640939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.640965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.641089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.641114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.641282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.641310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.641441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.641468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.641620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.641648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.641804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.641829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.642023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.642050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.642183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.642208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.642378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.642406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 
00:33:39.323 [2024-07-13 07:21:08.642539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.642568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.642729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.642757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.642924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.642955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.643081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.643107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.643258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.643283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.643442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.643470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.643597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.643625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.643781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.643808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.643944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.643969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.644115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.644140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 
00:33:39.323 [2024-07-13 07:21:08.644298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.644323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.644456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.644484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.644647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.644677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.644836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.644861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.645019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.645044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.645159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.645202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.645362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.645391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.645554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.645582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.645705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.645733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.645893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.645936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 
00:33:39.323 [2024-07-13 07:21:08.646056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.646081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.646233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.646258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.646375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.646400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.646590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.646618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.646766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.646794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.646939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.646966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.647085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.647110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.647262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.647287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.647413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.323 [2024-07-13 07:21:08.647438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.323 qpair failed and we were unable to recover it. 00:33:39.323 [2024-07-13 07:21:08.647586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.647615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 
00:33:39.324 [2024-07-13 07:21:08.647783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.647812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.647967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.647992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.648143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.648168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.648340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.648365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.648486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.648511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.648656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.648681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.648832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.648871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.648988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.649013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.649155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.649181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.649307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.649332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 
00:33:39.324 [2024-07-13 07:21:08.649479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.649507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.649658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.649689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.649798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.649823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.650028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.650053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.650196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.650221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.650382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.650409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.650530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.650559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.650751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.650777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.650952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.650980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.651135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.651163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 
00:33:39.324 [2024-07-13 07:21:08.651337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.651362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.651551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.651579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.651729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.651754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.651902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.651928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.652096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.652124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.652298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.652327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.652495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.652520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.652674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.652700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.652825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.652850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.653046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.653072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 
00:33:39.324 [2024-07-13 07:21:08.653271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.653299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.653490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.653518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.653674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.653702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.653869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.653912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.654089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.654115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.654328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.654353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.654516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.654544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.654680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.654708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.654874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.654900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 00:33:39.324 [2024-07-13 07:21:08.655021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.324 [2024-07-13 07:21:08.655065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.324 qpair failed and we were unable to recover it. 
00:33:39.324 [2024-07-13 07:21:08.655226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.324 [2024-07-13 07:21:08.655254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.325 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats for every connect attempt from 07:21:08.655 through 07:21:08.695, cycling through tqpair=0x18c2450, tqpair=0x7f441c000b90, and tqpair=0x7f4424000b90; each attempt to addr=10.0.0.2, port=4420 fails with errno = 111 and no qpair recovers ...]
00:33:39.330 [2024-07-13 07:21:08.695627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.330 [2024-07-13 07:21:08.695656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.330 qpair failed and we were unable to recover it.
00:33:39.330 [2024-07-13 07:21:08.695821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.695847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.695973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.695999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.696149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.696174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.696378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.696407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.696577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.696606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.696799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.696827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.697019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.697058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.697188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.697216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.697424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.697468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.697727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.697776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 
00:33:39.330 [2024-07-13 07:21:08.697972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.697998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.698193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.698239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.698402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.698444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.698696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.698748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.698911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.698937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.699099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.699146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.699317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.699345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.699561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.699608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.699758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.699783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.699955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.700000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 
00:33:39.330 [2024-07-13 07:21:08.700158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.700203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.700361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.700404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.700586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.700613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.700789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.700814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.330 qpair failed and we were unable to recover it. 00:33:39.330 [2024-07-13 07:21:08.700966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.330 [2024-07-13 07:21:08.701013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.701212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.701240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.701402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.701446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.701597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.701621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.701772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.701799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.701957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.702001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 
00:33:39.331 [2024-07-13 07:21:08.702173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.702202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.702381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.702424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.702576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.702601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.702722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.702748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.702878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.702904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.703042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.703086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.703246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.703288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.703420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.703448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.703617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.703642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.703824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.703849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 
00:33:39.331 [2024-07-13 07:21:08.704066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.704110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.704285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.704329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.704616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.704674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.704820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.704845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.705022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.705071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.705253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.705282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.705442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.705471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.705662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.705687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.705839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.705871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.706042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.706084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 
00:33:39.331 [2024-07-13 07:21:08.706202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.706229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.706433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.706477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.706600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.331 [2024-07-13 07:21:08.706625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.331 qpair failed and we were unable to recover it. 00:33:39.331 [2024-07-13 07:21:08.706745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.706770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.706932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.706977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.707139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.707182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.707382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.707418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.707560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.707606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.707841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.707907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.708069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.708095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 
00:33:39.615 [2024-07-13 07:21:08.708266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.708294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.708459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.708489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.708627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.708655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.708850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.708884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.709035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.709060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.709229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.709268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.709482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.709511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.709767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.709819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.709975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.710001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.710145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.710173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 
00:33:39.615 [2024-07-13 07:21:08.710415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.710465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.710629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.710657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.710798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.710823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.710974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.711000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.711148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.711174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.711353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.711381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.711564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.711592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.711753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.711782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.711922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.711948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.712091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.712116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 
00:33:39.615 [2024-07-13 07:21:08.712267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.712311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.712489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.712519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.712665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.712690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.712815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.712840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.713002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.713029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.713229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.713272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.713444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.713475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.713662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.713692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.713857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.713893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 00:33:39.615 [2024-07-13 07:21:08.714035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.714062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.615 qpair failed and we were unable to recover it. 
00:33:39.615 [2024-07-13 07:21:08.714234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.615 [2024-07-13 07:21:08.714263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.714424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.714453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.714609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.714638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.714829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.714855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.715010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.715036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.715175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.715203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.715360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.715389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.715551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.715579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.715721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.715764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.715940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.715969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 
00:33:39.616 [2024-07-13 07:21:08.716123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.716148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.716338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.716365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.716615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.716665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.716837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.716875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.717045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.717071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.717228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.717252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.717407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.717448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.717580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.717611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.717759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.717785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.717928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.717955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 
00:33:39.616 [2024-07-13 07:21:08.718075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.718102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.718248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.718283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.718445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.718475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.718646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.718672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.718796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.718822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.719001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.719027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.719203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.719232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.719419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.719449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.719611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.719640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.719807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.719838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 
00:33:39.616 [2024-07-13 07:21:08.720052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.720091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.720247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.720274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.720429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.720473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.720618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.720660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.720784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.720811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.720976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.721003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.721129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.721156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.721335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.721360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.721501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.721543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 00:33:39.616 [2024-07-13 07:21:08.721691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.616 [2024-07-13 07:21:08.721716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.616 qpair failed and we were unable to recover it. 
00:33:39.616 [2024-07-13 07:21:08.721883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.616 [2024-07-13 07:21:08.721922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.616 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats without interruption from 07:21:08.721 through 07:21:08.764, interleaving tqpair=0x18c2450, 0x7f441c000b90, 0x7f4424000b90, and 0x7f442c000b90, always with addr=10.0.0.2, port=4420 ...]
00:33:39.622 [2024-07-13 07:21:08.764456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.764484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.764686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.764710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.764883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.764926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.765076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.765102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.765258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.765288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.765450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.765478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.765680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.765705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.765850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.765893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.766031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.766056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.766227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.766251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 
00:33:39.622 [2024-07-13 07:21:08.766444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.766472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.766607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.766635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.766814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.766839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.766980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.767007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.767160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.767186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.767340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.767365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.767516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.767541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-07-13 07:21:08.767686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.622 [2024-07-13 07:21:08.767710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.767846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.767881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.768016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.768041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-07-13 07:21:08.768187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.768211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.768395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.768420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.768546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.768572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.768762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.768790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.768938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.768963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.769089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.769114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.769236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.769265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.769414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.769439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.769596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.769621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.769756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.769783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-07-13 07:21:08.769949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.769974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.770117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.770159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.770351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.770377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.770529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.770553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.770702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.770727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.770841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.770871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.770992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.771016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.771159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.771204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.771344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.771373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.771567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.771592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-07-13 07:21:08.771773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.771816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.772002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.772027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.772183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.772208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.772374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.772402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.772541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.772568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.772740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.772764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.772895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.772930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.773050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.773076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.773198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.773222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.773371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.773395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-07-13 07:21:08.773513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.773537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.773699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.773725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-07-13 07:21:08.773877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.623 [2024-07-13 07:21:08.773919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.774073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.774098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.774252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.774276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.774400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.774425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.774597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.774643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.774813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.774839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.774997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.775021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.775167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.775194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 
00:33:39.624 [2024-07-13 07:21:08.775336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.775360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.775512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.775555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.775737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.775765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.775942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.775968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.776092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.776116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.776249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.776274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.776422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.776451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.776647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.776675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.776884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.776909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.777032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.777056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 
00:33:39.624 [2024-07-13 07:21:08.777200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.777234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.777357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.777397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.777574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.777597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.777774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.777798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.777988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.778013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.778165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.778189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.778304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.778329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.778509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.778551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.778750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.778775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.778930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.778955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 
00:33:39.624 [2024-07-13 07:21:08.779132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.779157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.779306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.779331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.779456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.779480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.779652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.779695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.779882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.779927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.780069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.780094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.780264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.780292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.780440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.780465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.780640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.780664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.780813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.780842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 
00:33:39.624 [2024-07-13 07:21:08.781013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.781038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-07-13 07:21:08.781164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.624 [2024-07-13 07:21:08.781188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.781316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.781342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.781529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.781554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.781702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.781728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.781887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.781913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.782032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.782057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.782200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.782224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.782403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.782432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.782576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.782601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 
00:33:39.625 [2024-07-13 07:21:08.782762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.782787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.782941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.782965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.783111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.783136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.783285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.783327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.783521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.783545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.783730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.783755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.783948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.783977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.784130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.784155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.784344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.784369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.784544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.784602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 
00:33:39.625 [2024-07-13 07:21:08.784789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.784816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.785010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.785035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.785161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.785185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.785308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.785333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.785509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.785533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.785703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.785730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.785951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.785977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.786116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.786141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.786313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.786337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.786462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.786486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 
00:33:39.625 [2024-07-13 07:21:08.786677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.786708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.786856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.786886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.787060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.787085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.787196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.787221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.787346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.787370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.787522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.787546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.787692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.787717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.787832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.787856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.788032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.788057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 00:33:39.625 [2024-07-13 07:21:08.788178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.625 [2024-07-13 07:21:08.788202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.625 qpair failed and we were unable to recover it. 
00:33:39.625 [2024-07-13 07:21:08.788377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.625 [2024-07-13 07:21:08.788401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.625 qpair failed and we were unable to recover it.
00:33:39.626 [2024-07-13 07:21:08.792997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.626 [2024-07-13 07:21:08.793036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.626 qpair failed and we were unable to recover it.
00:33:39.631 (the three messages above repeat for every connect() attempt from [2024-07-13 07:21:08.788377] through [2024-07-13 07:21:08.828926], alternating between tqpair=0x7f442c000b90 and tqpair=0x18c2450; every attempt to addr=10.0.0.2, port=4420 failed with errno = 111 and no qpair could be recovered)
00:33:39.631 [2024-07-13 07:21:08.829051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.829076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.829230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.829255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.829365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.829390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.829512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.829538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.829692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.829718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.829902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.829928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.830099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.830143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.830288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.830315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.830462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.830487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.830637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.830679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 
00:33:39.631 [2024-07-13 07:21:08.830874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.830902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.831082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.831108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.831298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.831326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.831497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.831522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.831667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.831693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.831863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.831899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.832069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.832097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.832263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.832288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.832438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.832481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.832659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.832684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 
00:33:39.631 [2024-07-13 07:21:08.832829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.832855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.833012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.833040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.833170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.833198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.833347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.833372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.833520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.833546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.833689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.833715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.833830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.833855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.834060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.834089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.834225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.834253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.834399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.834424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 
00:33:39.631 [2024-07-13 07:21:08.834573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.631 [2024-07-13 07:21:08.834598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.631 qpair failed and we were unable to recover it. 00:33:39.631 [2024-07-13 07:21:08.834742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.834785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.834964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.834990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.835115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.835170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.835368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.835400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.835576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.835601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.835795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.835823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.836002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.836030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.836209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.836234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.836372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.836397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 
00:33:39.632 [2024-07-13 07:21:08.836519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.836544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.836748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.836777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.836951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.836976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.837102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.837127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.837315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.837340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.837503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.837531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.837719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.837744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.837889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.837915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.838086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.838114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.838277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.838305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 
00:33:39.632 [2024-07-13 07:21:08.838470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.838495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.838644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.838670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.838847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.838881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.839081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.839106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.839319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.839347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.839493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.839520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.839672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.839697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.839847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.839895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.840027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.840055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.840192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.840217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 
00:33:39.632 [2024-07-13 07:21:08.840363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.840402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.840566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.840595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.840766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.840790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.840917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.840958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.841093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.841121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.632 qpair failed and we were unable to recover it. 00:33:39.632 [2024-07-13 07:21:08.841324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.632 [2024-07-13 07:21:08.841349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.841490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.841517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.841646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.841673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.841813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.841838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.841962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.841987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 
00:33:39.633 [2024-07-13 07:21:08.842157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.842182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.842366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.842391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.842524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.842552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.842741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.842768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.842964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.842993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.843120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.843145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.843304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.843332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.843525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.843549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.843740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.843767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.843925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.843953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 
00:33:39.633 [2024-07-13 07:21:08.844124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.844148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.844295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.844337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.844495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.844522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.844665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.844690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.844864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.844912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.845077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.845104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.845252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.845277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.845422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.845465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.845630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.845657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.845827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.845851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 
00:33:39.633 [2024-07-13 07:21:08.846049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.846078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.846268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.846295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.846446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.846472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.846620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.846663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.846809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.846834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.846991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.847016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.847204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.847232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.847399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.847427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.847595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.847620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.847819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.847847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 
00:33:39.633 [2024-07-13 07:21:08.848021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.848046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.848204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.848230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.848375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.848417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.848563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.848592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.848771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.848796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.848947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.633 [2024-07-13 07:21:08.848975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.633 qpair failed and we were unable to recover it. 00:33:39.633 [2024-07-13 07:21:08.849133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.849160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.849327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.849351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.849476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.849502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.849627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.849652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 
00:33:39.634 [2024-07-13 07:21:08.849826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.849851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.849976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.850001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.850149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.850174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.850321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.850348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.850542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.850574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.850714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.850742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.850915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.850939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.851089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.851115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.851269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.851294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.851443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.851467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 
00:33:39.634 [2024-07-13 07:21:08.851587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.851612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.851757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.851782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.851914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.851940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.852140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.852168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.852353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.852380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.852522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.852546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.852665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.852691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.852872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.852901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.853043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.853067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 00:33:39.634 [2024-07-13 07:21:08.853217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.634 [2024-07-13 07:21:08.853257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.634 qpair failed and we were unable to recover it. 
00:33:39.634 [2024-07-13 07:21:08.853398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.634 [2024-07-13 07:21:08.853427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.634 qpair failed and we were unable to recover it.
[the same three-line error sequence repeats continuously, with only the microsecond timestamp advancing, from 2024-07-13 07:21:08.853398 through 07:21:08.890156 (log clock 00:33:39.634 to 00:33:39.639): every reconnect attempt on tqpair=0x7f442c000b90 to 10.0.0.2 port 4420 fails with errno = 111, and each time the qpair cannot be recovered]
00:33:39.639 [2024-07-13 07:21:08.890305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-13 07:21:08.890330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.639 qpair failed and we were unable to recover it. 00:33:39.639 [2024-07-13 07:21:08.890483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-13 07:21:08.890510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.639 qpair failed and we were unable to recover it. 00:33:39.639 [2024-07-13 07:21:08.890666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-13 07:21:08.890691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.639 qpair failed and we were unable to recover it. 00:33:39.639 [2024-07-13 07:21:08.890807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-13 07:21:08.890832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.639 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.890963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.890990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.891572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.891602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.891783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.891812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.891970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.891997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.892146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.892170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.892312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.892336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 
00:33:39.640 [2024-07-13 07:21:08.892476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.892505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.892661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.892688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.892852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.892891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.893038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.893063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.893187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.893216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.893347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.893372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.893525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.893553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.893700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.893725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.893850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.893890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.894014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.894040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 
00:33:39.640 [2024-07-13 07:21:08.894158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.894183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.894338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.894363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.894530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.894556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.894703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.894728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.894870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.894896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.895048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.895075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.895199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.895224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.895375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.895401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.895531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.895557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.895731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.895756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 
00:33:39.640 [2024-07-13 07:21:08.895885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.895911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.896036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.896061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.896210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.896246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.896395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.896420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.896572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.896598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.896714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.896739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.896870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.896896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.897031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.897056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.897174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.897199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.897327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.897354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 
00:33:39.640 [2024-07-13 07:21:08.897486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.897511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.897657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.897683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.897830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.897856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.898009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.898035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.640 qpair failed and we were unable to recover it. 00:33:39.640 [2024-07-13 07:21:08.898156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-13 07:21:08.898181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.898307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.898331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.898468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.898494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.898615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.898641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.898773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.898801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.898942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.898969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 
00:33:39.641 [2024-07-13 07:21:08.899093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.899119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.899277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.899318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.899462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.899491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.899657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.899683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.899831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.899880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.900008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.900035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.903883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.903938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.904106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.904135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.904281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.904310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.904465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.904491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 
00:33:39.641 [2024-07-13 07:21:08.904636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.904665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.904823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.904849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.904996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.905021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.905149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.905175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.905330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.905357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.905505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.905530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.905702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.905741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.905892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.905921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.906072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.906098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.906230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.906256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 
00:33:39.641 [2024-07-13 07:21:08.906369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.906395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.906585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.906611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.906738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.906765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.906921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.906948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.907073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.907099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.907251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.907277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.907405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.907431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.907592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.907618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.907766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-13 07:21:08.907792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.641 qpair failed and we were unable to recover it. 00:33:39.641 [2024-07-13 07:21:08.907947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.907974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 
00:33:39.642 [2024-07-13 07:21:08.908118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.908144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.908304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.908330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.908455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.908482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.908666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.908692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.908810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.908836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.908974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.909001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.909109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.909135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.909249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.909275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.909389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.909414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.909538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.909564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 
00:33:39.642 [2024-07-13 07:21:08.909740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.909766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.909887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.909914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.910037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.910062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.910180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.910207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.910329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.910360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.910508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.910534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.910682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.910707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.910829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.910856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.910985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.911011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.911128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.911153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 
00:33:39.642 [2024-07-13 07:21:08.911319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.911345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.911490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.911516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.911636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.911661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.911816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.911842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.911997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.912022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.912180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.912204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.912327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.912353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.912505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.912530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.912670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.912695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.912832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.912857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 
00:33:39.642 [2024-07-13 07:21:08.912983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.913008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.913167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.913192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.913348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.913374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.913496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.913523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.913892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.913933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.914102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.914130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.914716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.914747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.914914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.914942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.915092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.642 [2024-07-13 07:21:08.915119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.642 qpair failed and we were unable to recover it. 00:33:39.642 [2024-07-13 07:21:08.915268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.915294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 
00:33:39.643 [2024-07-13 07:21:08.915423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.915452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.915626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.915676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.915806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.915834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.915997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.916025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.916185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.916213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.916392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.916422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.916633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.916676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.916847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.916881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.917057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.917083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.917260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.917286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 
00:33:39.643 [2024-07-13 07:21:08.917434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.917459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.917632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.917657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.917798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.917823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.917978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.918004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.918142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.918167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.918305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.918362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.918578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.918637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.918806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.918831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.919014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.919040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.919185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.919210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 
00:33:39.643 [2024-07-13 07:21:08.919404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.919453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.919620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.919653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.919839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.919871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.919987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.920012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.920131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.920172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.920362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.920390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.920586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.920637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.920777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.920802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.921016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.921062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.921219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.921246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 
00:33:39.643 [2024-07-13 07:21:08.921416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.921445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.921605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.921634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.921835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.921862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.921997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.922023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.922180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.922206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.922360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.922386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.922588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.922616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.922783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.922809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.922940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.922966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 00:33:39.643 [2024-07-13 07:21:08.923085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.923109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.643 qpair failed and we were unable to recover it. 
00:33:39.643 [2024-07-13 07:21:08.923270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.643 [2024-07-13 07:21:08.923295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.923441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.923466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.923658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.923697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.923829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.923856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.924023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.924050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.924188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.924217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.924410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.924439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.924592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.924618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.924770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.924796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.924952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.924979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 
00:33:39.644 [2024-07-13 07:21:08.925100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.925126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.925250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.925276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.926164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.926194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.926381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.926424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.926581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.926611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.926786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.926818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.926977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.927005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.927158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.927204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.927357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.927383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.927504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.927547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 
00:33:39.644 [2024-07-13 07:21:08.928576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.928610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.928859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.928911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.929045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.929071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.929235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.929264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.929427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.929458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.929608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.929637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.929828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.929857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.930009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.930035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.930158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.930194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.930337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.930364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 
00:33:39.644 [2024-07-13 07:21:08.930507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.930537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.930732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.930760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.930888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.930932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.931051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.931078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.931278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.931318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.931448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.931474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.931701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.931748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.931886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.931930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.932078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.932103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.932227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.932270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 
00:33:39.644 [2024-07-13 07:21:08.932435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.644 [2024-07-13 07:21:08.932464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.644 qpair failed and we were unable to recover it. 00:33:39.644 [2024-07-13 07:21:08.932625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.932672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.932829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.932854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.932986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.933012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.933161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.933197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.933357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.933404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.933565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.933593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.933754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.933783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.933937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.933962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.934081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.934107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 
00:33:39.645 [2024-07-13 07:21:08.934254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.934281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.934498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.934550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.934680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.934712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.934879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.934909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.935051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.935077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.935259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.935285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.935408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.935434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.935594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.935623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.935770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.935801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.935965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.935992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 
00:33:39.645 [2024-07-13 07:21:08.936113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.936137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.936259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.936300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.936461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.936490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.936649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.936677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.936833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.936862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.937011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.937038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.937178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.937208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.940881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.940936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.941117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.941151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.941289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.941317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 
00:33:39.645 [2024-07-13 07:21:08.941480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.941507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.941675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.941700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.941853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.941894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.942023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.942048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.942169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.942195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.942359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.645 [2024-07-13 07:21:08.942386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.645 qpair failed and we were unable to recover it. 00:33:39.645 [2024-07-13 07:21:08.942558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.942589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.942735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.942764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.942942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.942970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.943128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.943172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 
00:33:39.646 [2024-07-13 07:21:08.943342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.943371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.943555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.943588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.943839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.943874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.944026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.944053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.944195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.944228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.944487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.944532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.944706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.944737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.944974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.945002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.945125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.945151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.945341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.945370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 
00:33:39.646 [2024-07-13 07:21:08.945545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.945572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.945796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.945822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.945998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.946024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.946255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.946281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.946529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.946580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.946737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.946767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.946943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.946970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.947093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.947120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.947275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.947301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.947426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.947453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 
00:33:39.646 [2024-07-13 07:21:08.947644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.947673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.947841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.947884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.948059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.948086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.948267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.948293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.948422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.948449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.948599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.948625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.948851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.948895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.949025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.949052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.949206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.949237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.949362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.949388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 
00:33:39.646 [2024-07-13 07:21:08.949535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.949561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.949691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.949718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.949874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.949900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.950750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.950792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.951012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.951040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.951212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.951244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.646 [2024-07-13 07:21:08.951415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.646 [2024-07-13 07:21:08.951443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.646 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.951608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.951634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.951780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.951806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.951960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.951987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 
00:33:39.647 [2024-07-13 07:21:08.952157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.952195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.952329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.952355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.952491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.952517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.952922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.952955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.953109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.953138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.953272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.953298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.953453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.953480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.953601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.953629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.953745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.953772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.953906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.953932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 
00:33:39.647 [2024-07-13 07:21:08.954053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.954082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.954263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.954289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.954412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.954438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.954563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.954590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.954742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.954768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.954907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.954935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.955070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.955097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.955218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.955252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.955401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.955427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 00:33:39.647 [2024-07-13 07:21:08.955554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.647 [2024-07-13 07:21:08.955580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.647 qpair failed and we were unable to recover it. 
00:33:39.647 [2024-07-13 07:21:08.955718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.955744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.955884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.955911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.956034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.956061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.956182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.956208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.956325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.956351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.956459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.956485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.956636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.956673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.956819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.956845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.956975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.957006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.957151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.957178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.957314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.957341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.957495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.957521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.957691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.957718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.957835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.957861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.958005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.958031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.958178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.958203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.958363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.958390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.647 [2024-07-13 07:21:08.958538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.647 [2024-07-13 07:21:08.958571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.647 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.958752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.958777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.958933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.958959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.959084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.959111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.959263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.959288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.959466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.959491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.959675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.959701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.959842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.959874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.960001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.960027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.960202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.960227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.961134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.961181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.961385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.961412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.961585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.961614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.961782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.961824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.962002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.962028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.962182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.962223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.962380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.962407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.962556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.962582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.962776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.962804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.962962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.962989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.963112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.963140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.963268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.963293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.963442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.963467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.963620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.963649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.963813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.963838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.963983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.964009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.964138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.964165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.964302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.964328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.964458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.964488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.964648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.964676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.964840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.964875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.965052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.965081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.965259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.965287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.965449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.965479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.965684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.965711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.965823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.965849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.966017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.966042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.966166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.966193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.966339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.966367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.966526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.966566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.966757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.648 [2024-07-13 07:21:08.966783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.648 qpair failed and we were unable to recover it.
00:33:39.648 [2024-07-13 07:21:08.966936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.966965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.967123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.967151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.967304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.967336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.967505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.967531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.967680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.967706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.967825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.967851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.967976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.968001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.968128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.968154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.968278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.968312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.968475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.968500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.968646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.968672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.968824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.968850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.969033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.969058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.969207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.969232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.969360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.969387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.969567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.969592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.969741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.969765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.969927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.969954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.970082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.970109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.970269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.970295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.970473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.970502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.970691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.970720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.970930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.970956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.971088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.971112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.971262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.971303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.971493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.971522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.971716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.971741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.971893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.971919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.972041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.972066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.972189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.972215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.972365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.972395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.972518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.972544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.972692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.972718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.972864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.972920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.973048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.973075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.649 qpair failed and we were unable to recover it.
00:33:39.649 [2024-07-13 07:21:08.973306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.649 [2024-07-13 07:21:08.973351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.973480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.973506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.973654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.973680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.973819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.973844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.973979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.974005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.974154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.974181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.974342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.974368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.974562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.974590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.974734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.974760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.974929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.974956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.975101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.975127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.975283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.975310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.975478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.975508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.975704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.975733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.975917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.975945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.976097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.976123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.976255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.976281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.976478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.976508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.976669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.976697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.976859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.976895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.977032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.977058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.977253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.977282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.977440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.977468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.977693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.977721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.977899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.977926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.978076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.978104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.978280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.978310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.978560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.978612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.978824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.978852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.979003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.979031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.979421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.979454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.979637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.979667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.979836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.979864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.980251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.980281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.980491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.980521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.980693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.980724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.980881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.980908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.981058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.981085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.981208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.981235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.981408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.981445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.650 [2024-07-13 07:21:08.981586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.650 [2024-07-13 07:21:08.981611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.650 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.981735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.981761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.981914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.981942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.982091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.982116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.982945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.982975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.983128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.983171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.983314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.983340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.983488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.983513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.983695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.983722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.984182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.984212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.984401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.984428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.984637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.984663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.985219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.985254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.985491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.985522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.985657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.985693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.985819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.985846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.985975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.986002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.986179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.986209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.986385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.986414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.986612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.986643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.986775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.986801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.986981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.987009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.987203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.987246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.987531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.987561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.987722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.987748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.987876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.987902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.988032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.988057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.988230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.988256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.988412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.988437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.988602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.988628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.988776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.988800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.988961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.988987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.989119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.989145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.989292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.989317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.989466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.989491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.989625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.989655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.989798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.989822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.990000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.990038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.990180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.990209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.990341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.990367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.990521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.990546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.651 qpair failed and we were unable to recover it.
00:33:39.651 [2024-07-13 07:21:08.990705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.651 [2024-07-13 07:21:08.990731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.990849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.990881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.991054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.991080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.991224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.991249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.991411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.991439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.991612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.991637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.991790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.991816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.991989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.992025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.992190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.992237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.992530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.992583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.992736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.992762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.992889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.992915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.993036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.993065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.993218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.993243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.993401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.993427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.993554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.993580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.993732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.993758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.993927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.993954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.994103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.994128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.994282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.994307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.994447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.994475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.994713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.994743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.994890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.994917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.995049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.995075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.995257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.995283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.995410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.995437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.995619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.995644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.995767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.995792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.995941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.995968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.996117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.996143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.996276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.996304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.996431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.996457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.996621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.996647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.996791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.996816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.996954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.996980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.997109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.652 [2024-07-13 07:21:08.997135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.652 qpair failed and we were unable to recover it.
00:33:39.652 [2024-07-13 07:21:08.997301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.652 [2024-07-13 07:21:08.997329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.652 qpair failed and we were unable to recover it. 00:33:39.652 [2024-07-13 07:21:08.997517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.652 [2024-07-13 07:21:08.997545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.652 qpair failed and we were unable to recover it. 00:33:39.652 [2024-07-13 07:21:08.997694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.652 [2024-07-13 07:21:08.997737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.652 qpair failed and we were unable to recover it. 00:33:39.652 [2024-07-13 07:21:08.998521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.652 [2024-07-13 07:21:08.998556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.652 qpair failed and we were unable to recover it. 00:33:39.652 [2024-07-13 07:21:08.998762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.652 [2024-07-13 07:21:08.998793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.652 qpair failed and we were unable to recover it. 00:33:39.652 [2024-07-13 07:21:08.998987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.652 [2024-07-13 07:21:08.999013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:08.999131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:08.999156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:08.999303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:08.999329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:08.999516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:08.999549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:08.999739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:08.999772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 
00:33:39.653 [2024-07-13 07:21:08.999904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:08.999931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.000053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.000078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.000204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.000234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.000424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.000470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.000639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.000665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.000792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.000817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.000952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.000978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.001115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.001141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.001294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.001319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.001477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.001505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 
00:33:39.653 [2024-07-13 07:21:09.001673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.001698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.001849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.001885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.002027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.002052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.002179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.002204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.002354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.002380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.002589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.002614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.002770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.002795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.002930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.002956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.003086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.003112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.003253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.003278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 
00:33:39.653 [2024-07-13 07:21:09.003453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.003503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.003675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.003701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.003824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.003851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.003982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.004007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.004156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.004181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.004336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.004361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.004473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.004517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.004658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.004686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.004860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.004894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.005040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.005069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 
00:33:39.653 [2024-07-13 07:21:09.005188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.005214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.005368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.005393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.005545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.005585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.005753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.005781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.005937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.005963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.653 qpair failed and we were unable to recover it. 00:33:39.653 [2024-07-13 07:21:09.006108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.653 [2024-07-13 07:21:09.006152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.006365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.006411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.006575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.006608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.006756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.006784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.006972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.006999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 
00:33:39.654 [2024-07-13 07:21:09.007114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.007139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.007267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.007292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.007493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.007539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.007711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.007740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.007923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.007949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.008080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.008105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.008269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.008294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.008445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.008488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.008684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.008713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.008882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.008908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 
00:33:39.654 [2024-07-13 07:21:09.009033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.009058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.009241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.009282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.009498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.009527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.009694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.009735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.009874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.009919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.010044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.010069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.010194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.010234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.010406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.010434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.010607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.010635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.010783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.010808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 
00:33:39.654 [2024-07-13 07:21:09.010940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.010967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.011090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.011115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.011233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.011258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.011464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.011493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.011624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.011652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.011798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.011823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.011975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.012001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.012153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.012178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.012304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.012329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.012491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.012520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 
00:33:39.654 [2024-07-13 07:21:09.012716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.654 [2024-07-13 07:21:09.012745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.654 qpair failed and we were unable to recover it. 00:33:39.654 [2024-07-13 07:21:09.012921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.012947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.013074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.013099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.013244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.013269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.013408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.013437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.013591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.013619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.013755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.013780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.013926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.013951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.014074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.014100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.014243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.014269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 
00:33:39.655 [2024-07-13 07:21:09.014439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.014465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.014646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.014671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.014825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.014851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.014982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.015007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.015121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.015146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.015281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.015307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.015453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.015479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.015632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.015657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.015780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.015806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.015958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.015998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 
00:33:39.655 [2024-07-13 07:21:09.016132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.016159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.016286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.016313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.016436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.016462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.016636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.016681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.016808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.016834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.016968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.016995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.017118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.017144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.017328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.017355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.017562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.017605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.018351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.018382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 
00:33:39.655 [2024-07-13 07:21:09.018624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.018671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.019528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.019559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.019748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.019775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.020480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.020510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.020690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.020716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.020899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.020928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.021085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.021112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.021278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.021303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.021443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.021498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.021677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.021703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 
00:33:39.655 [2024-07-13 07:21:09.021828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.021872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.022041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.022067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.022210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.022254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.655 qpair failed and we were unable to recover it. 00:33:39.655 [2024-07-13 07:21:09.022387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.655 [2024-07-13 07:21:09.022413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.656 qpair failed and we were unable to recover it. 00:33:39.656 [2024-07-13 07:21:09.022566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.656 [2024-07-13 07:21:09.022592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.656 qpair failed and we were unable to recover it. 00:33:39.656 [2024-07-13 07:21:09.022763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.656 [2024-07-13 07:21:09.022788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.656 qpair failed and we were unable to recover it. 00:33:39.656 [2024-07-13 07:21:09.022916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.656 [2024-07-13 07:21:09.022942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.656 qpair failed and we were unable to recover it. 00:33:39.656 [2024-07-13 07:21:09.023070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.656 [2024-07-13 07:21:09.023096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.656 qpair failed and we were unable to recover it. 00:33:39.656 [2024-07-13 07:21:09.023281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.656 [2024-07-13 07:21:09.023306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.656 qpair failed and we were unable to recover it. 00:33:39.656 [2024-07-13 07:21:09.023481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.656 [2024-07-13 07:21:09.023507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.656 qpair failed and we were unable to recover it. 
00:33:39.656 [2024-07-13 07:21:09.023677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.656 [2024-07-13 07:21:09.023717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.656 qpair failed and we were unable to recover it.
00:33:39.656 [... the same three-line sequence (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats back-to-back from 07:21:09.023 through 07:21:09.064; every attempt targets addr=10.0.0.2, port=4420 and cycles through the tqpair handles 0x18c2450, 0x7f4424000b90, 0x7f441c000b90, and 0x7f442c000b90 ...]
00:33:39.946 [2024-07-13 07:21:09.064276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.946 [2024-07-13 07:21:09.064309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.946 qpair failed and we were unable to recover it.
00:33:39.946 [2024-07-13 07:21:09.064578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.946 [2024-07-13 07:21:09.064624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.946 qpair failed and we were unable to recover it. 00:33:39.946 [2024-07-13 07:21:09.064788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.064816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.064974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.065001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.065126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.065151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.065298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.065342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.065503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.065550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.065741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.065769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.065954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.065980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.066111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.066136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.066282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.066307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 
00:33:39.947 [2024-07-13 07:21:09.066472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.066500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.066691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.066719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.066870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.066899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.067030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.067056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.067198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.067223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.067439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.067487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.067644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.067672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.067822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.067849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.068007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.068046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.068179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.068206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 
00:33:39.947 [2024-07-13 07:21:09.068378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.068422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.068597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.068646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.068833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.068877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.069005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.069032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.069172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.069199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.069386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.069430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.069625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.069673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.069812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.069840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.069996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.070023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.070167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.070200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 
00:33:39.947 [2024-07-13 07:21:09.070418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.070465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.070629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.070664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.070862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.070925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.071068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.071106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.071333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.071381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.071551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.071634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.071796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.071824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.071983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.072008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.072132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.072176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.072397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.072442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 
00:33:39.947 [2024-07-13 07:21:09.072657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.072702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.072840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.947 [2024-07-13 07:21:09.072873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.947 qpair failed and we were unable to recover it. 00:33:39.947 [2024-07-13 07:21:09.073012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.073037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.073221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.073246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.073457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.073498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.073680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.073726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.073892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.073917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.074038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.074062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.074231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.074259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.074459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.074505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 
00:33:39.948 [2024-07-13 07:21:09.074650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.074692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.074879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.074908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.075043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.075067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.075208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.075249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.075414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.075461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.075655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.075684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.075877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.075905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.076049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.076074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.076226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.076251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.076405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.076433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 
00:33:39.948 [2024-07-13 07:21:09.076644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.076689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.076886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.076912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.077062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.077091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.077236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.077265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.077491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.077539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.077668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.077696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.077854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.077887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.078048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.078074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.078199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.078224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.078371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.078396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 
00:33:39.948 [2024-07-13 07:21:09.078568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.078595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.078723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.078751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.078950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.078976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.079090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.079114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.079246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.948 [2024-07-13 07:21:09.079286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.948 qpair failed and we were unable to recover it. 00:33:39.948 [2024-07-13 07:21:09.079444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.079470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.079701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.079728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.079861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.079914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.080058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.080084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.080230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.080255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 
00:33:39.949 [2024-07-13 07:21:09.080447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.080475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.080631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.080659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.080831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.080856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.081008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.081033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.081162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.081187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.081361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.081407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.081600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.081648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.081771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.081799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.082018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.082058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.082191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.082219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 
00:33:39.949 [2024-07-13 07:21:09.082476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.082521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.082670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.082715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.082921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.082966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.083135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.083178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.083430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.083480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.083653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.083678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.083848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.083884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.084041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.084085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.084277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.084305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.084493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.084538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 
00:33:39.949 [2024-07-13 07:21:09.084652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.084678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.084823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.084849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.085002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.085046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.085221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.085264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.085462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.085505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.085668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.085710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.085854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.085887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.086055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.086098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.086290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.086318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.086592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.086640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 
00:33:39.949 [2024-07-13 07:21:09.086792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.086817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.087008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.087053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.087191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.087233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.087407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.087450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.087621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.949 [2024-07-13 07:21:09.087664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:39.949 qpair failed and we were unable to recover it. 00:33:39.949 [2024-07-13 07:21:09.087839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.087873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.088048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.088081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.088250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.088278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.088497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.088544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.088737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.088765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 
00:33:39.950 [2024-07-13 07:21:09.088918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.088944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.089117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.089143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.089300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.089329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.089492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.089520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.089742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.089770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.089945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.089971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.090153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.090181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.090343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.090376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.090548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.090576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 00:33:39.950 [2024-07-13 07:21:09.090737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.950 [2024-07-13 07:21:09.090765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.950 qpair failed and we were unable to recover it. 
00:33:39.950 [2024-07-13 07:21:09.090929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.950 [2024-07-13 07:21:09.090956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.950 qpair failed and we were unable to recover it.
00:33:39.951 [2024-07-13 07:21:09.097762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.951 [2024-07-13 07:21:09.097802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:39.951 qpair failed and we were unable to recover it.
[... the same three-line failure repeats without interruption through 07:21:09.130572 (wall clock 00:33:39.955), alternating between tqpair=0x18c2450 and tqpair=0x7f4424000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:33:39.955 [2024-07-13 07:21:09.130716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.130741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.130870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.130896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.131014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.131039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.131191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.131219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.131384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.131411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.131546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.131574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.131733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.131761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.131933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.131959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.132108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.132133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.132270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.132297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 
00:33:39.955 [2024-07-13 07:21:09.132453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.132481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.132607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.132635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.132833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.132858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.133029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.133054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.955 [2024-07-13 07:21:09.133229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.955 [2024-07-13 07:21:09.133257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.955 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.133511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.133558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.133711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.133739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.133918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.133944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.134060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.134085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.134256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.134284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 
00:33:39.956 [2024-07-13 07:21:09.134415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.134443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.134604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.134632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.134772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.134800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.134977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.135003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.135118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.135143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.135293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.135318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.135466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.135492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.135652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.135680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.135884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.135927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.136075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.136100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 
00:33:39.956 [2024-07-13 07:21:09.136232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.136257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.136421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.136449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.136615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.136643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.136783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.136808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.136958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.136985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.137132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.137157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.137330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.137358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.137526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.137554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.137689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.137717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.137880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.137906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 
00:33:39.956 [2024-07-13 07:21:09.138033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.138057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.138198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.138228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.138369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.138397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.138523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.138550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.138712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.138740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.138905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.138948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.139071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.139096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.139231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.139256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.139422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.139450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.139607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.139635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 
00:33:39.956 [2024-07-13 07:21:09.139782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.139807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.139956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.139982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.140102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.140127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.140241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.140266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.140487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.140533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.140685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.956 [2024-07-13 07:21:09.140713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.956 qpair failed and we were unable to recover it. 00:33:39.956 [2024-07-13 07:21:09.140843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.140881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.141019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.141061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.141238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.141278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.141449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.141477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 
00:33:39.957 [2024-07-13 07:21:09.141686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.141717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.141881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.141926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.142048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.142090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.142386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.142413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.142578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.142605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.142750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.142774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.142932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.142958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.143078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.143103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.143234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.143259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.143430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.143458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 
00:33:39.957 [2024-07-13 07:21:09.143580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.143606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.143735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.143762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.143908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.143933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.144053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.144078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.144227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.144267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.144421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.144449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.144608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.144635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.144769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.144793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.144927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.144952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.145067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.145091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 
00:33:39.957 [2024-07-13 07:21:09.145235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.145258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.145426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.145474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.145661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.145688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.145845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.145879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.146027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.146051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.146167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.146192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.146336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.146360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.146520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.146552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.146682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.146710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.146884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.146927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 
00:33:39.957 [2024-07-13 07:21:09.147046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.957 [2024-07-13 07:21:09.147071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.957 qpair failed and we were unable to recover it. 00:33:39.957 [2024-07-13 07:21:09.147187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.147212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.147334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.147358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.147527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.147554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.147713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.147740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.147897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.147942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.148063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.148088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.148217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.148242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.148360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.148384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.148537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.148561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 
00:33:39.958 [2024-07-13 07:21:09.148684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.148709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.148854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.148884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.149004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.149029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.149174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.149198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.149339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.149364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.149482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.149507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.149676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.149700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.149851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.149881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.150008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.150033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.150177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.150202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 
00:33:39.958 [2024-07-13 07:21:09.150322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.150347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.150472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.150497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.150643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.150668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.150792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.150817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.150966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.150991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.151112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.151137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.151261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.151285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.151411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.151435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.151565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.151589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.151718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.151743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 
00:33:39.958 [2024-07-13 07:21:09.151897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.151922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.152088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.152112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.152240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.152265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.152386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.152410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.152559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.152584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.152706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.152730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.152846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.152888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.153042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.153067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.153216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.153241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 00:33:39.958 [2024-07-13 07:21:09.153385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.153409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it. 
00:33:39.958 [2024-07-13 07:21:09.153572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.958 [2024-07-13 07:21:09.153600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.958 qpair failed and we were unable to recover it.
00:33:39.958 [... last message repeated for every reconnect attempt from 07:21:09.153572 through 07:21:09.188777; each attempt logged the identical connect() failure (errno = 111) against tqpair=0x18c2450, addr=10.0.0.2, port=4420 ...]
00:33:39.964 [2024-07-13 07:21:09.188753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.188777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it.
00:33:39.964 [2024-07-13 07:21:09.188927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.188953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.189103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.189128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.189300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.189325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.189490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.189518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.189704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.189731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.189996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.190022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.190196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.190224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.190366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.190393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.190554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.190578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.190730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.190755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 
00:33:39.964 [2024-07-13 07:21:09.190902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.190944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.191089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.191113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.191260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.191286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.191445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.191473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.191639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.191664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.191782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.191823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.192011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.192040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.192210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.192235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.192383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.192425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.192620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.192647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 
00:33:39.964 [2024-07-13 07:21:09.192815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.192841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.193031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.193057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.193196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.193225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.193399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.193424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.193545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.193570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.193716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.193741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.193886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.193911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.194084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.194109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.194300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.194328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.194491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.194515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 
00:33:39.964 [2024-07-13 07:21:09.194680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.194708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.964 [2024-07-13 07:21:09.194907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.964 [2024-07-13 07:21:09.194935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.964 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.195078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.195106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.195273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.195301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.195486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.195514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.195657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.195681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.195799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.195841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.196021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.196049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.196194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.196218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.196390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.196415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 
00:33:39.965 [2024-07-13 07:21:09.196562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.196590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.196732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.196756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.196880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.196905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.197059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.197084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.197206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.197230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.197375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.197418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.197547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.197574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.197737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.197764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.197941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.197966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.198139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.198181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 
00:33:39.965 [2024-07-13 07:21:09.198348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.198373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.198542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.198569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.198734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.198761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.198898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.198923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.199085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.199113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.199309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.199334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.199459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.199484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.199640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.199666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.199804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.199847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.200065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.200094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 
00:33:39.965 [2024-07-13 07:21:09.200284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.200311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.200452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.200480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.200646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.200671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.200793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.200817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.200996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.201024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.201174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.201200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.201354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.201378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.201544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.201570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.201736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.201761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.201929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.201958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 
00:33:39.965 [2024-07-13 07:21:09.202136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.202160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.965 qpair failed and we were unable to recover it. 00:33:39.965 [2024-07-13 07:21:09.202315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.965 [2024-07-13 07:21:09.202340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.202480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.202507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.202688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.202713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.202839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.202864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.203048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.203076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.203230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.203257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.203427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.203451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.203607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.203635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.203753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.203781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 
00:33:39.966 [2024-07-13 07:21:09.203947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.203973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.204098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.204139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.204314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.204340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.204491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.204515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.204684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.204712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.204850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.204884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.205066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.205094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.205222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.205262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.205392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.205420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.205587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.205612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 
00:33:39.966 [2024-07-13 07:21:09.205734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.205759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.205936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.205964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.206107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.206132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.206303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.206327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.206496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.206524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.206691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.206717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.206869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.206909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.207045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.207072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.207222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.207247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.207389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.207413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 
00:33:39.966 [2024-07-13 07:21:09.207575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.207602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.207739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.207764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.207918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.207944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.208065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.208090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.208238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.208262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.208409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.208433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.208563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.208587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.966 [2024-07-13 07:21:09.208734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.966 [2024-07-13 07:21:09.208759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.966 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.208880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.208922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.209097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.209126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 
00:33:39.967 [2024-07-13 07:21:09.209296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.209320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.209506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.209530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.209680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.209723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.209885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.209934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.210135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.210177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.210340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.210368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.210536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.210560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.210727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.210755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.210916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.210945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.211137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.211162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 
00:33:39.967 [2024-07-13 07:21:09.211328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.211354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.211510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.211538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.211706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.211731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.211908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.211937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.212088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.212115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.212267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.212291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.212439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.212478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.212661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.212685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.212799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.212824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 00:33:39.967 [2024-07-13 07:21:09.212954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.967 [2024-07-13 07:21:09.212979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:39.967 qpair failed and we were unable to recover it. 
00:33:39.967 [2024-07-13 07:21:09.213090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.967 [2024-07-13 07:21:09.213115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:39.967 qpair failed and we were unable to recover it.
00:33:39.967 (the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x18c2450 with addr=10.0.0.2, port=4420 from 07:21:09.213269 through 07:21:09.229582)
00:33:39.969 [2024-07-13 07:21:09.229782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.969 [2024-07-13 07:21:09.229825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.969 qpair failed and we were unable to recover it.
00:33:39.972 (the same error sequence repeats for tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 from 07:21:09.229996 through 07:21:09.252267)
00:33:39.972 [2024-07-13 07:21:09.252413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.252438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.972 qpair failed and we were unable to recover it. 00:33:39.972 [2024-07-13 07:21:09.252651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.252676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.972 qpair failed and we were unable to recover it. 00:33:39.972 [2024-07-13 07:21:09.252847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.252880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.972 qpair failed and we were unable to recover it. 00:33:39.972 [2024-07-13 07:21:09.253014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.253041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.972 qpair failed and we were unable to recover it. 00:33:39.972 [2024-07-13 07:21:09.253206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.253231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.972 qpair failed and we were unable to recover it. 00:33:39.972 [2024-07-13 07:21:09.253423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.253451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.972 qpair failed and we were unable to recover it. 00:33:39.972 [2024-07-13 07:21:09.253634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.253660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.972 qpair failed and we were unable to recover it. 00:33:39.972 [2024-07-13 07:21:09.253834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.972 [2024-07-13 07:21:09.253860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.254038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.254065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.254237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.254263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 
00:33:39.973 [2024-07-13 07:21:09.254410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.254435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.254580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.254606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.254740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.254765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.254987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.255013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.255157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.255185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.255350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.255378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.255525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.255550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.255665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.255690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.255887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.255917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.256108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.256133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 
00:33:39.973 [2024-07-13 07:21:09.256284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.256309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.256454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.256479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.256650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.256675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.256849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.256884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.257047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.257072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.257262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.257287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.257452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.257480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.257634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.257662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.257826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.257851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.258049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.258077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 
00:33:39.973 [2024-07-13 07:21:09.258239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.258267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.258418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.258443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.258594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.258619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.258804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.258832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.259036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.259062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.259257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.259285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.259425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.259452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.259591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.259617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.259809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.259837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.260040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.260068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 
00:33:39.973 [2024-07-13 07:21:09.260223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.260249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.260395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.260422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.260582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.260616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.260763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.260788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.260937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.260963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.261105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.261147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.261298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.261323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.261473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.973 [2024-07-13 07:21:09.261515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.973 qpair failed and we were unable to recover it. 00:33:39.973 [2024-07-13 07:21:09.261690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.261717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.261851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.261882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 
00:33:39.974 [2024-07-13 07:21:09.262027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.262052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.262173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.262198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.262371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.262396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.262589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.262616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.262781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.262809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.262957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.262982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.263128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.263169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.263334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.263362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.263529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.263554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.263715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.263743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 
00:33:39.974 [2024-07-13 07:21:09.263924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.263950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.264107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.264132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.264307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.264336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.264525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.264553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.264714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.264742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.264952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.264977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.265100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.265125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.265272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.265313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.265495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.265523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.265723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.265747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 
00:33:39.974 [2024-07-13 07:21:09.265943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.265972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.266132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.266160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.266310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.266336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.266461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.266486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.266654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.266682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.266879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.266913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.267049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.267077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.267240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.267268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.267440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.267465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.267610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.267650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 
00:33:39.974 [2024-07-13 07:21:09.267816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.267840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.268019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.268045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.268180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.974 [2024-07-13 07:21:09.268212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.974 qpair failed and we were unable to recover it. 00:33:39.974 [2024-07-13 07:21:09.268351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.268379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.268545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.268570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.268686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.268726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.268932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.268957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.269086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.269111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.269232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.269274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.269411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.269440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 
00:33:39.975 [2024-07-13 07:21:09.269582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.269607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.269756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.269798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.269944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.269970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.270123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.270148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.270266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.270291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.270461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.270489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.270661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.270687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.270811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.270838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.270995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.271020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.271145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.271171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 
00:33:39.975 [2024-07-13 07:21:09.271374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.271402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.271588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.271616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.271775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.271802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.271967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.271992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.272168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.272193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.272371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.272395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.272567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.272594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.272727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.272757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.272928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.272954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.273122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.273164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 
00:33:39.975 [2024-07-13 07:21:09.273324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.273351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.273490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.273515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.273659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.273700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.273910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.273954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.274066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.274091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.274239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.274280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.274411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.274439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.274608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.274634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.274805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.274833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.275009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.275034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 
00:33:39.975 [2024-07-13 07:21:09.275160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.275185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.275305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.275330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.275504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.275536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.975 [2024-07-13 07:21:09.275709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.975 [2024-07-13 07:21:09.275734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.975 qpair failed and we were unable to recover it. 00:33:39.976 [2024-07-13 07:21:09.275907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.976 [2024-07-13 07:21:09.275933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.976 qpair failed and we were unable to recover it. 00:33:39.976 [2024-07-13 07:21:09.276080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.976 [2024-07-13 07:21:09.276106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.976 qpair failed and we were unable to recover it. 00:33:39.976 [2024-07-13 07:21:09.276317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.976 [2024-07-13 07:21:09.276342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.976 qpair failed and we were unable to recover it. 00:33:39.976 [2024-07-13 07:21:09.276593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.976 [2024-07-13 07:21:09.276648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.976 qpair failed and we were unable to recover it. 00:33:39.976 [2024-07-13 07:21:09.276834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.976 [2024-07-13 07:21:09.276862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.976 qpair failed and we were unable to recover it. 00:33:39.976 [2024-07-13 07:21:09.277046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.976 [2024-07-13 07:21:09.277072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.976 qpair failed and we were unable to recover it. 
00:33:39.976 [2024-07-13 07:21:09.277274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.976 [2024-07-13 07:21:09.277302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.976 qpair failed and we were unable to recover it.
00:33:39.976 (the preceding three-line error sequence repeats for each reconnect attempt stamped 2024-07-13 07:21:09.277463 through 07:21:09.316262, always with tqpair=0x7f442c000b90, addr=10.0.0.2, port=4420)
00:33:39.981 [2024-07-13 07:21:09.316380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.316405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.316557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.316582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.316702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.316727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.316880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.316923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.317092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.317120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.317235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.317259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.317432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.317457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.317667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.317692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.317883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.317925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.318104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.318144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 
00:33:39.981 [2024-07-13 07:21:09.318282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.318309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.318508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.318533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.318732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.318759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.318922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.318950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.319144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.319170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.319358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.319383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.319532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.319558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.319743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.319769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.319939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.319967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.320135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.320175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 
00:33:39.981 [2024-07-13 07:21:09.320344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.320369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.981 qpair failed and we were unable to recover it. 00:33:39.981 [2024-07-13 07:21:09.320488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.981 [2024-07-13 07:21:09.320514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.320663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.320692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.320842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.320872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.321025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.321055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.321227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.321256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.321448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.321474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.321633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.321659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.321801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.321844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.322058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.322083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 
00:33:39.982 [2024-07-13 07:21:09.322230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.322258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.322440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.322468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.322607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.322632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.322759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.322786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.322932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.322957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.323132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.323156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.323315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.323343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.323498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.323526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.323717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.323742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.323918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.323944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 
00:33:39.982 [2024-07-13 07:21:09.324089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.324114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.324341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.324366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.324495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.324523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.324685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.324715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.324890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.324924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.325063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.325091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.325287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.325314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.325481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.325507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.325696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.325724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.325909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.325935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 
00:33:39.982 [2024-07-13 07:21:09.326059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.326084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.326234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.326276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.326406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.326434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.326572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.326597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.326721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.326748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.326931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.326960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.327097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.327122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.327273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.327313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.327502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.327530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.327677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.327701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 
00:33:39.982 [2024-07-13 07:21:09.327853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.327884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.328059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.982 [2024-07-13 07:21:09.328087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.982 qpair failed and we were unable to recover it. 00:33:39.982 [2024-07-13 07:21:09.328257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.328286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.328404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.328430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.328579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.328604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.328814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.328839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.329015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.329043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.329211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.329239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.329378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.329403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.329555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.329580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 
00:33:39.983 [2024-07-13 07:21:09.329725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.329768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.329912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.329937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.330089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.330114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.330271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.330297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.330418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.330442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.330591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.330617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.330757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.330785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.330966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.330993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.331123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.331148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.331270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.331295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 
00:33:39.983 [2024-07-13 07:21:09.331443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.331470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.331589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.331614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.331737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.331762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.331882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.331915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.332063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.332088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.332240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.332268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.332430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.332455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.332650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.332678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.332870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.332898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.333071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.333097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 
00:33:39.983 [2024-07-13 07:21:09.333286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.333314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.333465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.333493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.333657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.333682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.333803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.333846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.334000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.334028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.334208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.334233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.983 [2024-07-13 07:21:09.334348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.983 [2024-07-13 07:21:09.334393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.983 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.334534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.334562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.334722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.334747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.334886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.334930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 
00:33:39.984 [2024-07-13 07:21:09.335060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.335088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.335237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.335264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.335404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.335433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.335610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.335638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.335808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.335833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.335991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.336016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.336129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.336154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.336309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.336334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.336502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.336530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.336688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.336716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 
00:33:39.984 [2024-07-13 07:21:09.336954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.336980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.337168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.337196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.337356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.337384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.337577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.337602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.337761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.337789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.337940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.337967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.338093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.338117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.338266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.338308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.338442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.338470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.338616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.338641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 
00:33:39.984 [2024-07-13 07:21:09.338759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.338784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.338933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.338959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.339110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.339136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.339287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.339313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.339430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.339456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.339659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.339685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.339872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.339930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.340064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.340092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.340243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.340268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 00:33:39.984 [2024-07-13 07:21:09.340446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.984 [2024-07-13 07:21:09.340490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:39.984 qpair failed and we were unable to recover it. 
00:33:39.984 [2024-07-13 07:21:09.340617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.984 [2024-07-13 07:21:09.340644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:39.984 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for about 200 further back-to-back reconnect attempts (target timestamps 2024-07-13 07:21:09.340832 through 07:21:09.377694, elapsed counter 00:33:39.984 to 00:33:40.265), every attempt against tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420, every connect() failing with errno = 111, and every attempt ending "qpair failed and we were unable to recover it." ...]
00:33:40.265 [2024-07-13 07:21:09.377822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.377850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.378000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.378025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.378149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.378174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.378293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.378318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.378465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.378490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.378614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.378639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.378762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.378802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.378963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.378992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.379134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.379162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.379303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.379330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 
00:33:40.265 [2024-07-13 07:21:09.379481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.379507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.379622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.379647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.379761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.379786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.379936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.379962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.380108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.380133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.380297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.380322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.380476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.380501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.380620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.380645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.380785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.380828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.381000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.381026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 
00:33:40.265 [2024-07-13 07:21:09.381149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.381174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.381300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.381325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.381483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.381511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.381641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.381669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.381836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.381872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.382011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.382038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.382156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.382181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.382349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.382374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.382511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.382535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 00:33:40.265 [2024-07-13 07:21:09.382681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.382710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.265 qpair failed and we were unable to recover it. 
00:33:40.265 [2024-07-13 07:21:09.382836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.265 [2024-07-13 07:21:09.382883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.383018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.383046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.383194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.383219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.383401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.383426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.383544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.383570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.383688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.383713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.383856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.383886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.384051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.384076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.384234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.384259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.384384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.384409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 
00:33:40.266 [2024-07-13 07:21:09.384539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.384567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.384710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.384736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.384856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.384887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.385014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.385039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.385151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.385176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.385367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.385392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.385537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.385562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.385692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.385717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.385884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.385912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.386078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.386103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 
00:33:40.266 [2024-07-13 07:21:09.386272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.386297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.386434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.386462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.386627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.386655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.386797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.386822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.386979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.387005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.387173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.387201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.387368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.387396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.387532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.387557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.387722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.387749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.387929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.387954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 
00:33:40.266 [2024-07-13 07:21:09.388075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.388100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.388246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.388271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.388416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.388441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.388602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.388627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.388771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.388797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.388958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.388984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.389124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.389150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.389294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.389319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.389471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.389499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.389671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.389700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 
00:33:40.266 [2024-07-13 07:21:09.389851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.389882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.390051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.390079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.390207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.390235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.390382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.390407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.390554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.390579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.390722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.390750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.390889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.390917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.391057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.391082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.391209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.391234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.391382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.391407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 
00:33:40.266 [2024-07-13 07:21:09.391528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.391569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.391738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.391763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.391882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.391907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.392026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.392052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.392171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.392196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.392318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.392344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.392460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.392485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.392609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.392634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.392775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.392802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.392951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.392977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 
00:33:40.266 [2024-07-13 07:21:09.393101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.393127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.393250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.393275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.393433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.393461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.393609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.393635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.393760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.393785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.393912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.393954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.394124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.394152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.394318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.394343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.394456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.394481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.394620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.394648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 
00:33:40.266 [2024-07-13 07:21:09.394791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.394819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.394979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.395005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.395171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.395199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.395365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.395392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.395557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.395584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.395724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.395751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.395880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.395923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.396051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.266 [2024-07-13 07:21:09.396080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.266 qpair failed and we were unable to recover it. 00:33:40.266 [2024-07-13 07:21:09.396253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.396280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.396447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.396477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 
00:33:40.267 [2024-07-13 07:21:09.396592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.396619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.396739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.396765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.396890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.396935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.397057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.397083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.397231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.397257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.397375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.397401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.397574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.397600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.397798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.397826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.397985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.398011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.398131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.398157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 
00:33:40.267 [2024-07-13 07:21:09.398331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.398358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.398506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.398532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.398679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.398704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.398825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.398851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.399008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.399034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.399154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.399179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.399312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.399337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.399513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.399541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.399672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.399699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.399842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.399871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 
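The two error sites above repeat in lockstep: posix.c:1038:posix_sock_create logs the raw connect() failure (on Linux, errno 111 is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.2:4420), and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock then fails the qpair that owned the socket. The following standalone sketch reproduces only the failing step, using a plain POSIX TCP socket rather than SPDK's actual posix.c internals; the address and port are the ones in the log.

/*
 * Illustrative sketch only -- not SPDK source. Attempts the same TCP
 * connect() that posix_sock_create() is logging about; with no listener
 * on 10.0.0.2:4420 it fails with errno 111 (ECONNREFUSED).
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against a host with no listener on that port, it prints "connect() failed, errno = 111 (Connection refused)", matching the entries above.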
[... identical failures for tqpair=0x7f442c000b90 continue from 07:21:09.400025 through 07:21:09.400894 ...]
00:33:40.267 [2024-07-13 07:21:09.400935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d0480 (9): Bad file descriptor
00:33:40.267 [2024-07-13 07:21:09.401108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.267 [2024-07-13 07:21:09.401148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.267 qpair failed and we were unable to recover it.
[... two more identical failures for tqpair=0x7f4424000b90 (07:21:09.401276, 07:21:09.401454) ...]
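The one entry that breaks the pattern is the flush failure at 07:21:09.400935: errno 9 is EBADF, meaning the socket descriptor behind tqpair=0x18d0480 had already been closed or invalidated by the time nvme_tcp_qpair_process_completions tried to flush it. A tiny sketch of the same symptom, again plain POSIX rather than SPDK code:

/*
 * Illustrative sketch only. Writing to a socket that has already been
 * close()d fails with errno 9 (EBADF), the same errno as the
 * "Failed to flush tqpair ... (9): Bad file descriptor" entry.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                      /* tear the socket down first */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0)  /* flush attempt on a dead descriptor */
        printf("flush failed, errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}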
00:33:40.267 [2024-07-13 07:21:09.401644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.267 [2024-07-13 07:21:09.401687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.267 qpair failed and we were unable to recover it.
00:33:40.267 [2024-07-13 07:21:09.401875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.267 [2024-07-13 07:21:09.401926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:40.267 qpair failed and we were unable to recover it.
[... identical failures continue for tqpair=0x18c2450 from 07:21:09.402076 through 07:21:09.403460, then for tqpair=0x7f4424000b90 from 07:21:09.403656 through 07:21:09.405747, then for tqpair=0x18c2450 again from 07:21:09.405885 onward ...]
00:33:40.267 [2024-07-13 07:21:09.407292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.267 [2024-07-13 07:21:09.407320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:40.267 qpair failed and we were unable to recover it.
00:33:40.267 [2024-07-13 07:21:09.407523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.407568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.407729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.407756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.407905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.407931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.408059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.408084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.408227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.267 [2024-07-13 07:21:09.408253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.267 qpair failed and we were unable to recover it. 00:33:40.267 [2024-07-13 07:21:09.408410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.408437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.408596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.408624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.408773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.408801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.408950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.408976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.409123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.409166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.409302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.409359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.409514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.409556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.409704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.409730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.409883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.409917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.410082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.410126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.410266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.410310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.410482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.410532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.410664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.410691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.410812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.410838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.410992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.411036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.411157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.411189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.411358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.411387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.411523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.411548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.411669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.411696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.411831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.411879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.412066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.412096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.412264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.412291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.412446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.412492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.412700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.412747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.412882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.412924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.413091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.413119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.413279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.413307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.413433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.413461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.413689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.413741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.413876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.413903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.414049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.414098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.414270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.414314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.414485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.414529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.414652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.414678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.414853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.414888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.415079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.415106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.415280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.415324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.415567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.415617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.415732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.415758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.415885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.415916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.416060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.416108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.416283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.416326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.416491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.416518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.416673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.416700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.416884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.416927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.417067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.417095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.417264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.417292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.417447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.417492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.417649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.417677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.417847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.417881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.418028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.418053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.418198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.418226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.418435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.418480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.418754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.418799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.418944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.418970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.419139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.419167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.419363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.419391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.419556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.419583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.419751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.419778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.419963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.420002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.420135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.420162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.420298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.420328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.420569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.420614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.420769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.420797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.420943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.420971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.421139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.421182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.421328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.421359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.421576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.421619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.421739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.421766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.421914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.421943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.422107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.422151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.422309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.422352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.422524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.422567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.422714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.422742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.422873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.422918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 
00:33:40.268 [2024-07-13 07:21:09.423096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.423124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.423307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.423335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.423489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.268 [2024-07-13 07:21:09.423517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.268 qpair failed and we were unable to recover it. 00:33:40.268 [2024-07-13 07:21:09.423652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.423677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.423827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.423852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.424055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.424083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.424227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.424255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.424420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.424448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.424623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.424669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.424827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.424853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 
00:33:40.269 [2024-07-13 07:21:09.425024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.425067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.425243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.425272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.425439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.425467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.425596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.425625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.425789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.425818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.425960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.425985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.426101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.426126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.426283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.426308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.426480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.426508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.426690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.426734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 
00:33:40.269 [2024-07-13 07:21:09.426859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.426893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.427073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.427098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.427243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.427271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.427408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.427436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.427571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.427598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.427728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.427756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.427956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.427982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.428102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.428127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.428276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.428301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.428414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.428457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 
00:33:40.269 [2024-07-13 07:21:09.428621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.428649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.428881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.428924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.429076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.429101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.429271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.429299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.429534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.429583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.429771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.429798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.429973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.429999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.430143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.430169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.430326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.430374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.430564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.430591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 
00:33:40.269 [2024-07-13 07:21:09.430728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.430756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.430943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.430969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.431086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.431111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.431261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.431286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.431427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.431452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.431572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.431597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.431782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.431840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.432019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.432058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.432227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.432266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 00:33:40.269 [2024-07-13 07:21:09.432423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.269 [2024-07-13 07:21:09.432452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.269 qpair failed and we were unable to recover it. 
00:33:40.269 [2024-07-13 07:21:09.432672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.269 [2024-07-13 07:21:09.432700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420
00:33:40.269 qpair failed and we were unable to recover it.
[... roughly 200 further identical connect() failures with errno = 111, logged between 07:21:09.432 and 07:21:09.471 and cycling through tqpair=0x18c2450, 0x7f441c000b90, 0x7f4424000b90, and 0x7f442c000b90, each ending with "qpair failed and we were unable to recover it." ...]
00:33:40.272 [2024-07-13 07:21:09.471158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.272 [2024-07-13 07:21:09.471186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.272 qpair failed and we were unable to recover it.
00:33:40.272 [2024-07-13 07:21:09.471327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.471352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.471498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.471545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.471719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.471744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.471872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.471897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.472045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.472069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.472254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.472282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.472460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.472485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.472677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.472705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.472829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.472856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.473009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.473034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 
00:33:40.272 [2024-07-13 07:21:09.473161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.473186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.473358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.473382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.473570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.473595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.473790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.473817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.473991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.474017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.474137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.474165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.474316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.474341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.474517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.474545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.474706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.474734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.474893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.474935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 
00:33:40.272 [2024-07-13 07:21:09.475086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.475111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.475280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.475304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.475427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.475452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.475619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.475645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.475812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.475838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.475966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.476009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.476175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.476201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.476369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.476394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.476513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.476538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.476690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.476714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 
00:33:40.272 [2024-07-13 07:21:09.476902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.476926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.477095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.477123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.477310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.477338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.477503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.477527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.477652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.477693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.477831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.477860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.478026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.478050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.478171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.478196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.478369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.478397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 00:33:40.272 [2024-07-13 07:21:09.478590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.272 [2024-07-13 07:21:09.478615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.272 qpair failed and we were unable to recover it. 
00:33:40.272 [2024-07-13 07:21:09.478784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.478811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.478978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.479002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.479151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.479176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.479299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.479342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.479530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.479557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.479756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.479781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.479948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.479977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.480162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.480190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.480361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.480386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.480501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.480526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 
00:33:40.273 [2024-07-13 07:21:09.480650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.480674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.480847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.480880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.481020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.481044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.481163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.481189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.481312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.481337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.481482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.481513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.481658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.481685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.481834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.481858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.482004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.482045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.482180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.482207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 
00:33:40.273 [2024-07-13 07:21:09.482379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.482404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.482579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.482619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.482755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.482784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.482979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.483004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.483172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.483200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.483335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.483364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.483532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.483557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.483723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.483751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.483885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.483913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.484077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.484102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 
00:33:40.273 [2024-07-13 07:21:09.484293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.484321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.484463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.484490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.484678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.484702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.484894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.484922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.485059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.485086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.485245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.485270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.485434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.485461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.485616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.485643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.485806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.485830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.485999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.486028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 
00:33:40.273 [2024-07-13 07:21:09.486172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.486199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.486359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.486384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.486515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.486559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.486758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.486782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.486936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.486962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.487113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.487139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.487311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.487339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.487507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.487532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.487724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.487752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.487913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.487943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 
00:33:40.273 [2024-07-13 07:21:09.488089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.488114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.488284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.488309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.488480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.488507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.488695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.488722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.488902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.488928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.489081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.489109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.489280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.489304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.489506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.489531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.489655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.489680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.489803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.489827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 
00:33:40.273 [2024-07-13 07:21:09.489951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.489976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.490122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.490148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.490318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.490344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.490510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.490538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.490674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.490701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.490892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.490917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.491086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.491113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.491273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.491301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.491441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.491466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.491614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.491639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 
00:33:40.273 [2024-07-13 07:21:09.491861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.491892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.492067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.492093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.492233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.492262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.492433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.492458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.492603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.492627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.492781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.492807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.492944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.492988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.273 [2024-07-13 07:21:09.493131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.273 [2024-07-13 07:21:09.493156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.273 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.493307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.493332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.493475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.493499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 
00:33:40.274 [2024-07-13 07:21:09.493649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.493675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.493849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.493885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.494078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.494106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.494272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.494297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.494411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.494436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.494616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.494644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.494891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.494933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.495082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.495108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.495284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.495311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.495482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.495507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 
00:33:40.274 [2024-07-13 07:21:09.495634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.495677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.495842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.495878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.496052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.496077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.496245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.496272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.496429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.496457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.496619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.496648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.496797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.496839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.497009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.497035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.497186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.497212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.497382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.497410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 
00:33:40.274 [2024-07-13 07:21:09.497571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.497598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.497761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.497787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.497934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.497976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.498109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.498136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.498280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.498304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.498454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.498479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.498603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.498627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.498767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.498791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.498969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.498998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.499166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.499193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 
00:33:40.274 [2024-07-13 07:21:09.499362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.499387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.499510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.499535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.499658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.499683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.499828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.499853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.500017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.500043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.500190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.500215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.500339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.500364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.500510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.500536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.500722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.500747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.500956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.500981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 
00:33:40.274 [2024-07-13 07:21:09.501169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.501197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.501354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.501382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.501534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.501559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.501709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.501734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.501882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.501908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.502055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.502082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.502258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.502286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.502443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.502666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.502691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.502855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.502890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 
00:33:40.274 [2024-07-13 07:21:09.503048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.503075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.503247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.503273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.503434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.503460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.503579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.503603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.503755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.503780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.503945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.503973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.504135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.504163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.504340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.504364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.504490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.504515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.504690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.504718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 
00:33:40.274 [2024-07-13 07:21:09.504876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.504900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.505074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.505117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.505275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.505302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.505468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.505493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.505639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.505681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.505834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.505859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.506015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.506040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.506162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.506188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.274 [2024-07-13 07:21:09.506365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.274 [2024-07-13 07:21:09.506407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.274 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.506551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.506576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.506698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.506722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.506926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.506954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.507116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.507140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.507304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.507332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.507491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.507518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.507681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.507705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.507824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.507870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.508060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.508088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.508243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.508268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.508417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.508459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.508648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.508674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.508846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.508879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.509053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.509083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.509226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.509268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.509440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.509464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.509611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.509637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.509784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.509825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.509978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.510003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.510175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.510200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.510312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.510336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.510509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.510533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.510698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.510725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.510863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.510897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.511068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.511093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.511254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.511282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.511468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.511495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.511701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.511727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.511918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.511947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.512084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.512112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.512311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.512335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.512499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.512526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.512690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.512717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.512857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.512890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.513073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.513101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.513297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.513321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.513465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.513490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.513614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.513639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.513813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.513840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.514016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.514041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.514210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.514238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.514401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.514428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.514595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.514620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.514784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.514811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.514984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.515010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.515182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.515206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.515328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.515353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.515504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.515528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.515679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.515702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.515849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.515896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.516063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.516090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.516254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.516278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.516424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.516450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.516599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.516647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.516819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.516844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.516962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.517006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.517169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.517197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.517373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.517398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.517546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.517588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.517721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.517748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.517923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.517949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.518119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.518146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.518279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.518306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.518470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.518495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.518662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.518690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.518846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.518879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.519023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.519048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.519210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.519234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.519355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.519379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.519532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.519558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.519709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.519749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 
00:33:40.275 [2024-07-13 07:21:09.519941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.519970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.520137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.520162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.520307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.275 [2024-07-13 07:21:09.520348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.275 qpair failed and we were unable to recover it. 00:33:40.275 [2024-07-13 07:21:09.520513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.520540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.520707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.520733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.520905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.520946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.521083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.521110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.521258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.521283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.521407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.521431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.521628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.521653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 
00:33:40.276 [2024-07-13 07:21:09.521825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.521854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.522006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.522030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.522220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.522246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.522393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.522418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.522592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.522616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.522746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.522775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.522950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.522976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.523170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.523198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.523327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.523355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.523555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.523580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 
00:33:40.276 [2024-07-13 07:21:09.523710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.523739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.523881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.523909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.524058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.524087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.524300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.524329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.524463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.524491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.524631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.524655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.524809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.524834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.524997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.525023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.525145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.525170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.525353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.525378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 
00:33:40.276 [2024-07-13 07:21:09.525557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.525583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.525726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.525751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.525875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.525901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.526070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.526095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.526242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.526268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.526462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.526490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.526668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.526694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.526841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.526872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.527007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.527049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.527182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.527210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 
00:33:40.276 [2024-07-13 07:21:09.527358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.527384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.527529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.527570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.527708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.527735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.527875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.527901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.528026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.528052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.528224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.528249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.528369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.528394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.528545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.528571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.528714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.528742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 00:33:40.276 [2024-07-13 07:21:09.528913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.276 [2024-07-13 07:21:09.528938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.276 qpair failed and we were unable to recover it. 
00:33:40.279 [2024-07-13 07:21:09.564577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.564605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.564775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.564799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.564964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.564993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.565198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.565224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.565345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.565370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.565543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.565569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.565712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.565740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.565917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.565943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.566058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.566100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.566270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.566299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 
00:33:40.279 [2024-07-13 07:21:09.566497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.566522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.566723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.566751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.566912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.566940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.567108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.567133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.567288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.567313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.567463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.567505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.567671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.567697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.567826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.567852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.568041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.568066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.568255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.568280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 
00:33:40.279 [2024-07-13 07:21:09.568401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.568425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.568574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.568599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.568740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.568773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.568995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.569020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.569164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.569191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.569337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.569363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.569511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.569536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.569661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.569686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.569838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.569863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.570016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.570041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 
00:33:40.279 [2024-07-13 07:21:09.570183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.570207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.570377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.570403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.570568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.570595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.570786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.570811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.570986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.571012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.571132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.571158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.571314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.571355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.571504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.571528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.571700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.571725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.571887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.571916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 
00:33:40.279 [2024-07-13 07:21:09.572115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.572139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.572349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.572373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.572489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.572514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.572683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.572711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.572912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.572938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.573079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.573104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.573271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.573296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.573462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.573490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.573665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.573689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.573842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.573872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 
00:33:40.279 [2024-07-13 07:21:09.573989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.574014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.574136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.574161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.574315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.574340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.574509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.574537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.574706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.574732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.574905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.574930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.575093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.575121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.575282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.279 [2024-07-13 07:21:09.575310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.279 qpair failed and we were unable to recover it. 00:33:40.279 [2024-07-13 07:21:09.575453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.575477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.575596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.575622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.575815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.575843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.576022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.576047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.576207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.576239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.576397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.576425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.576598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.576624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.576750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.576793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.576981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.577008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.577181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.577207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.577376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.577404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.577570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.577598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.577741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.577765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.577918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.577943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.578116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.578145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.578313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.578337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.578506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.578532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.578671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.578699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.578876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.578902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.579051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.579091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.579262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.579286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.579458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.579483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.579620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.579649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.579816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.579843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.580020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.580044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.580212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.580240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.580410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.580435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.580611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.580635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.580810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.580836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.581039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.581064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.581211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.581236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.581433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.581461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.581618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.581645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.581814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.581839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.582017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.582046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.582214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.582242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.582407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.582432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.582600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.582628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.582814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.582841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.582982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.583007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.583195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.583223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.583400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.583426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.583599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.583624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.583743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.583768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.583883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.583912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.584045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.584070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.584185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.584210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.584332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.584357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.584510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.584534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.584646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.584688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.584839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.584872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.585065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.585090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.585262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.585290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.585454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.585482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.585627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.585652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.585837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.585882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.586052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.586078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.586252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.586277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.586417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.586444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.586571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.586600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.586738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.586763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.586940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.586984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.587184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.587209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.587387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.587411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.587578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.587607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.587824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.587852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.588030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.588056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.588221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.588249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.588385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.588412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.588586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.588611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.588773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.588801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.588996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.589025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 
00:33:40.280 [2024-07-13 07:21:09.589193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.589219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.589391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.589434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.589564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.589591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.589785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.589810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.590005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.590034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.590206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.280 [2024-07-13 07:21:09.590231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.280 qpair failed and we were unable to recover it. 00:33:40.280 [2024-07-13 07:21:09.590377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.281 [2024-07-13 07:21:09.590402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.281 qpair failed and we were unable to recover it. 00:33:40.281 [2024-07-13 07:21:09.590525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.281 [2024-07-13 07:21:09.590567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.281 qpair failed and we were unable to recover it. 00:33:40.281 [2024-07-13 07:21:09.590769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.281 [2024-07-13 07:21:09.590797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.281 qpair failed and we were unable to recover it. 00:33:40.281 [2024-07-13 07:21:09.590999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.281 [2024-07-13 07:21:09.591025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.281 qpair failed and we were unable to recover it. 
00:33:40.281 [2024-07-13 07:21:09.591170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.281 [2024-07-13 07:21:09.591195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.281 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet repeats continuously for tqpair=0x7f442c000b90 (addr=10.0.0.2, port=4420) from 07:21:09.591343 through 07:21:09.628522 ...]
00:33:40.284 [2024-07-13 07:21:09.628718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.284 [2024-07-13 07:21:09.628744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.284 qpair failed and we were unable to recover it.
00:33:40.284 [2024-07-13 07:21:09.628890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.628918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.629048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.629075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.629225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.629252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.629397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.629423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.629547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.629575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.629735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.629764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.629918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.629945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.630067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.630093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.630218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.630244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.630392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.630419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 
00:33:40.284 [2024-07-13 07:21:09.630594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.630623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.630767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.630793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.630932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.630959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.631107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.631136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.631342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.631367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.631493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.631519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.631643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.631669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.631848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.631879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.632004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.632030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 00:33:40.284 [2024-07-13 07:21:09.632202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.632232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.284 qpair failed and we were unable to recover it. 
00:33:40.284 [2024-07-13 07:21:09.632396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.284 [2024-07-13 07:21:09.632422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.632548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.632574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.632697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.632723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.632876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.632903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.633050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.633077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.633199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.633225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.633390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.633417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.633589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.633618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.633784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.633813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.633985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.634013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 
00:33:40.285 [2024-07-13 07:21:09.634142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.634186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.634327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.634358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.634527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.634554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.634703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.634729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.634931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.634957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.635076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.635103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.635253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.635279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.635407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.635433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.635591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.635619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.635771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.635797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 
00:33:40.285 [2024-07-13 07:21:09.635925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.635953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.636089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.636115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.636279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.636305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.636457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.636484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.636627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.636653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.636821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.636850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.637010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.637037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.637183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.637208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.637327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.637352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.637501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.637528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 
00:33:40.285 [2024-07-13 07:21:09.637681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.637707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.637849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.637881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.638039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.638082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.638232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.638258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.638406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.638431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.638555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.638581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.638754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.638783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.638964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.638991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.639109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.639134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.639331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.639361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 
00:33:40.285 [2024-07-13 07:21:09.639539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.639568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.639705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.639732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.639881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.639908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.640077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.640106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.640299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.640328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.640466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.640493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.640622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.640649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.640801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.640828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.640953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.640979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.641108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.641136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 
00:33:40.285 [2024-07-13 07:21:09.641251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.641294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.641465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.641491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.641621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.641648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.641778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.641805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.641950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.641976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.642101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.642127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.642253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.642278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.642432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.642459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.642606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.642632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.642759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.642786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 
00:33:40.285 [2024-07-13 07:21:09.642918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.642944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.643094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.643121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.643243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.643269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.643458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.643485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.643609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.643635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.643808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.643836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.644030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.644057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.644176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.644203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.644323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.644349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.644477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.644503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 
00:33:40.285 [2024-07-13 07:21:09.644655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.644681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.644859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.644892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.645029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.645055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.645184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.645210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.645362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.645390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.645559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.645584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.645757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.645786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.645988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.285 [2024-07-13 07:21:09.646015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.285 qpair failed and we were unable to recover it. 00:33:40.285 [2024-07-13 07:21:09.646143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.646169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.646292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.646321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 
00:33:40.286 [2024-07-13 07:21:09.646496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.646525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.646675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.646702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.646829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.646855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.646984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.647012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.647158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.647183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.647327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.647354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.647523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.647549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.647693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.647719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.647876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.647902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.648051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.648080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 
00:33:40.286 [2024-07-13 07:21:09.648225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.648251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.648423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.648452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.648588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.648617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.648813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.648839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.648988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.649015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.649155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.649185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.649366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.649392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.649539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.649565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.649716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.649742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.649894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.649921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 
00:33:40.286 [2024-07-13 07:21:09.650040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.650066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.650221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.650247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.650365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.650392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.650512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.650538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.650687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.650713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.650905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.650948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.651102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.651128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.651303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.651328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.651454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.651479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.651622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.651648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 
00:33:40.286 [2024-07-13 07:21:09.651793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.651819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.651935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.651961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.652112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.652155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.652289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.652319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.652451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.652477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.652604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.652630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.652805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.652833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.653023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.653050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.653173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.653199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 00:33:40.286 [2024-07-13 07:21:09.653353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.286 [2024-07-13 07:21:09.653406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.286 qpair failed and we were unable to recover it. 
00:33:40.289 [2024-07-13 07:21:09.689376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.689404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.689546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.689575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.689735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.689764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.689925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.689952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.690097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.690123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.690268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.690295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.690436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.690465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.690672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.690724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.690915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.690955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.691126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.691154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 
00:33:40.289 [2024-07-13 07:21:09.691330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.691385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.691570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.691599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.691772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.691800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.691951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.691977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.692101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.692127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.692268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.692293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.692486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.692515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.692657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.692688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.692830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.692859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.693035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.693062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 
00:33:40.289 [2024-07-13 07:21:09.693231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.693257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.693408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.693435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.693613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.693642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.693807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.693838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.693990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.694017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.289 [2024-07-13 07:21:09.694204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.289 [2024-07-13 07:21:09.694245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.289 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.694420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.694472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.694676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.694721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.694890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.694936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.695084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.695130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 
00:33:40.290 [2024-07-13 07:21:09.695303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.695348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.695543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.695587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.695734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.695760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.695957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.696003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.696176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.696222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.696364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.696409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.696585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.696617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.696798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.696824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.697007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.697040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.697182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.697212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 
00:33:40.290 [2024-07-13 07:21:09.697375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.697404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.697540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.697570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.697729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.697758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.697943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.697971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.698150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.698190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.698388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.698418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.698606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.698638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.698764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.698789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.698933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.698958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.699077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.699105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 
00:33:40.290 [2024-07-13 07:21:09.699269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.699314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.699472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.699501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.699624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.699653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.699825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.699852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.700022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.700050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.700221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.700250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.700467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.700496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.700685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.700714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.700848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.700885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.701066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.701093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 
00:33:40.290 [2024-07-13 07:21:09.701244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.701270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.701465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.701499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.701696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.701726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.701911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.701938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.290 [2024-07-13 07:21:09.702088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.290 [2024-07-13 07:21:09.702115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.290 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.702290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.702316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.702452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.702482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.702614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.702644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.702783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.702809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.702956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.702983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 
00:33:40.571 [2024-07-13 07:21:09.703132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.703177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.703380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.703407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.703531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.703561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.703719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.703750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.703914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.703941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.704088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.704114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.704276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.704305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.704488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.704518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.704660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.704703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.704857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.704908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 
00:33:40.571 [2024-07-13 07:21:09.705032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.705058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.705248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.705274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.705505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.705555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.705698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.705729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.705931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.705959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.706107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.706164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.706334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.706364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.706534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.706562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.706752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.706781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.706943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.706970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 
00:33:40.571 [2024-07-13 07:21:09.707120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.707162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.707345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.707398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.707654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.707707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.707877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.707933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.708091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.708118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.708308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.708334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.708503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.708533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.708667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.708696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.708849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.708882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.709067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.709094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 
00:33:40.571 [2024-07-13 07:21:09.709255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.709285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.709486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.709515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.709702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.571 [2024-07-13 07:21:09.709732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.571 qpair failed and we were unable to recover it. 00:33:40.571 [2024-07-13 07:21:09.709881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.709927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.710107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.710134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.710306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.710335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.710541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.710598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.710777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.710820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.711001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.711029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.711176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.711202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 
00:33:40.572 [2024-07-13 07:21:09.711347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.711373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.711548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.711574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.711767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.711794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.711957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.711984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.712107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.712134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.712286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.712313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.712487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.712514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.712686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.712720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.712927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.712954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.713108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.713135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 
00:33:40.572 [2024-07-13 07:21:09.713302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.713331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.713497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.713525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.713697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.713723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.713877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.713904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.714055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.714098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.714268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.714295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.714449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.714493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.714664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.714691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.714883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.714926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.715058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.715088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 
00:33:40.572 [2024-07-13 07:21:09.715281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.715310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.715493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.715520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.715724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.715754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.715947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.715974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.716101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.716128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.716305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.716349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.716515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.716545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.716743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.716772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.716947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.716974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 00:33:40.572 [2024-07-13 07:21:09.717104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.572 [2024-07-13 07:21:09.717130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.572 qpair failed and we were unable to recover it. 
00:33:40.572 [2024-07-13 07:21:09.717309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.572 [2024-07-13 07:21:09.717335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.572 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, from 07:21:09.717505 through 07:21:09.757552 ...]
00:33:40.578 [2024-07-13 07:21:09.757745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.757772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.757970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.758000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.758165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.758194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.758366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.758393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.758524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.758550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.758704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.758748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.758922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.758950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.759095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.759126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.759317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.759343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.759492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.759519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 
00:33:40.578 [2024-07-13 07:21:09.759669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.759696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.759847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.759890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.760070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.760097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.760217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.760245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.760444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.760474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.760620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.760646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.760798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.760824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.760968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.760995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.761148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.761175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.761388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.761418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 
00:33:40.578 [2024-07-13 07:21:09.761607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.761636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.761811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.761838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.762010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.762039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.762185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.762214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.762387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.762414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.762585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.762614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.762751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.762780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.762950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.578 [2024-07-13 07:21:09.762979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.578 qpair failed and we were unable to recover it. 00:33:40.578 [2024-07-13 07:21:09.763176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.763205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.763397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.763426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 
00:33:40.579 [2024-07-13 07:21:09.763599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.763626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.763752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.763780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.763935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.763978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.764173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.764199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.764374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.764404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.764545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.764575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.764720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.764747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.764894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.764921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.765093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.765123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.765271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.765297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 
00:33:40.579 [2024-07-13 07:21:09.765443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.765486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.765650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.765679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.765857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.765889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.766037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.766064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.766217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.766246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.766415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.766441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.766566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.766611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.766781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.766817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.766980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.767008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.767134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.767160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 
00:33:40.579 [2024-07-13 07:21:09.767312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.767341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.767530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.767557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.767750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.767779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.767934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.767961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.768090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.768117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.768259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.768286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.768424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.768454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.768598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.768625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.768773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.768817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.768986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.769016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 
00:33:40.579 [2024-07-13 07:21:09.769211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.769237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.579 [2024-07-13 07:21:09.769386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.579 [2024-07-13 07:21:09.769414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.579 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.769570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.769597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.769744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.769772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.769925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.769953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.770126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.770155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.770351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.770378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.770539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.770568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.770730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.770759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.770922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.770949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 
00:33:40.580 [2024-07-13 07:21:09.771118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.771148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.771320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.771347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.771524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.771551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.771688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.771717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.771895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.771925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.772099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.772125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.772290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.772319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.772487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.772513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.772680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.772711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.772880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.772923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 
00:33:40.580 [2024-07-13 07:21:09.773096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.773123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.773314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.773341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.773539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.773568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.773753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.773782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.773942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.773968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.774095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.774121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.774280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.774307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.774494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.774524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.774642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.774684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.774823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.774853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 
00:33:40.580 [2024-07-13 07:21:09.775008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.775034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.775181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.775208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.775401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.775430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.775579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.775605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.775724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.775751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.775917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.775947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.776144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.776170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.776342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.776371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.776529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.776559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.580 [2024-07-13 07:21:09.776704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.776731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 
00:33:40.580 [2024-07-13 07:21:09.776851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.580 [2024-07-13 07:21:09.776882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.580 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.777039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.777069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.777265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.777291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.777487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.777515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.777711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.777740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.777905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.777932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.778053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.778080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.778282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.778312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.778451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.778477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.778602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.778629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 
00:33:40.581 [2024-07-13 07:21:09.778774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.778803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.778972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.778999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.779167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.779197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.779356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.779385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.779553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.779580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.779767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.779797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.779991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.780021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.780165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.780191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.780337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.780380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.780519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.780549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 
00:33:40.581 [2024-07-13 07:21:09.780714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.780744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.780903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.780948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.781067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.781093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.781264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.781291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.781434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.781464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.781622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.781651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.781849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.781882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.782027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.782062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.782251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.782280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 00:33:40.581 [2024-07-13 07:21:09.782424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.581 [2024-07-13 07:21:09.782450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.581 qpair failed and we were unable to recover it. 
00:33:40.581 [2024-07-13 07:21:09.782560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.581 [2024-07-13 07:21:09.782586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.581 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated 208 more times between 07:21:09.782758 and 07:21:09.822806 ...]
00:33:40.587 [2024-07-13 07:21:09.823024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.587 [2024-07-13 07:21:09.823052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.587 qpair failed and we were unable to recover it.
00:33:40.587 [2024-07-13 07:21:09.823216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.823245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.823411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.823441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.823588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.823616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.823762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.823805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.824000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.824027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.824157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.824184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.824376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.824405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.824541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.824570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.824742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.824769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.824946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.824975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 
00:33:40.587 [2024-07-13 07:21:09.825108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.825138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.825272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.825299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.825449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.825491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.825658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.825687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.825846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.825883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.826053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.826083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.826254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.826280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.826459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.826485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.826675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.826705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.826842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.826881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 
00:33:40.587 [2024-07-13 07:21:09.827073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.827099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.827270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.827300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.827467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.827496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.827665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.827691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.827841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.827894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.828033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.828062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.587 [2024-07-13 07:21:09.828210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.587 [2024-07-13 07:21:09.828238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.587 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.828381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.828424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.828634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.828661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.828802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.828829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 
00:33:40.588 [2024-07-13 07:21:09.828984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.829011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.829158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.829201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.829346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.829373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.829524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.829551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.829667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.829694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.829837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.829864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.830001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.830028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.830178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.830204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.830347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.830374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.830542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.830571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 
00:33:40.588 [2024-07-13 07:21:09.830760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.830789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.831007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.831033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.831190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.831220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.831382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.831411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.831580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.831606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.831801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.831830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.832017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.832045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.832220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.832247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.832410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.832439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.832629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.832657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 
00:33:40.588 [2024-07-13 07:21:09.832838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.832871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.833022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.833048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.833223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.833252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.833449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.833476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.833645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.833679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.833876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.833903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.834025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.834052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.834200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.834228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.834423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.834452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 00:33:40.588 [2024-07-13 07:21:09.834618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.834644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.588 qpair failed and we were unable to recover it. 
00:33:40.588 [2024-07-13 07:21:09.834770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.588 [2024-07-13 07:21:09.834798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.834971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.834999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.835149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.835176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.835342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.835372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.835564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.835593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.835739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.835765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.835889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.835917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.836118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.836148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.836291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.836318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.836439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.836465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 
00:33:40.589 [2024-07-13 07:21:09.836646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.836675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.836806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.836832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.836987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.837031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.837170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.837199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.837398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.837424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.837560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.837589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.837783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.837809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.837924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.837949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.838101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.838129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.838331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.838361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 
00:33:40.589 [2024-07-13 07:21:09.838527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.838555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.838733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.838762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.838936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.838963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.839077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.839104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.839257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.839284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.839487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.839516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.839690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.839717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.839917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.839947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.840122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.840151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.840320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.840347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 
00:33:40.589 [2024-07-13 07:21:09.840537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.840567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.840707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.840736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.840882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.840909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.841101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.841129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.841267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.841302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.841502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.841529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.841726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.841755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.841935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.841962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.842140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.842167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.842309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.842338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 
00:33:40.589 [2024-07-13 07:21:09.842505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.589 [2024-07-13 07:21:09.842534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.589 qpair failed and we were unable to recover it. 00:33:40.589 [2024-07-13 07:21:09.842706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.842733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.842927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.842958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.843126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.843153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.843281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.843308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.843469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.843498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.843623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.843652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.843823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.843850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.844063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.844093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.844251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.844280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 
00:33:40.590 [2024-07-13 07:21:09.844422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.844449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.844604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.844630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.844823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.844852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.845031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.845057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.845216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.845245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.845435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.845464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.845634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.845660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.845782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.845826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.846010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.846038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.846189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.846215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 
00:33:40.590 [2024-07-13 07:21:09.846385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.846414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.846546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.846576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.846766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.846793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.846916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.846944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.847092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.847118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.847259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.847286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.847477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.847507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.847638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.847668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.847870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.847897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 00:33:40.590 [2024-07-13 07:21:09.848069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.848098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it. 
00:33:40.590 [2024-07-13 07:21:09.848272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.590 [2024-07-13 07:21:09.848300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.590 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 07:21:09.848272 through 07:21:09.887016, roughly two hundred occurrences in total; the failing tqpair is 0x7f442c000b90 throughout, apart from short runs against tqpair=0x7f4424000b90 (07:21:09.876031-09.877349) and tqpair=0x18c2450 (07:21:09.877571-09.878658) before it returns to 0x7f442c000b90 ...]
00:33:40.596 [2024-07-13 07:21:09.887145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.887172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.887330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.887357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.887511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.887537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.887698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.887725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.887850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.887886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.888041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.888069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.888208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.888235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.888364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.888390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.888544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.888571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.888743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.888772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 
00:33:40.596 [2024-07-13 07:21:09.888941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.888969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.889096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.889123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.889278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.889305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.889467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.889498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.889659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.889689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.889829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.889856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.890071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.890100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.890292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.890321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.890545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.890599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.890768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.890798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 
00:33:40.596 [2024-07-13 07:21:09.890972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.890999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.891147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.891178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.891364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.891390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.891537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.891563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.891729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.891766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.891974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.892001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.892121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.892148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.892297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.892323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.892464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.892495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.892659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.892689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 
00:33:40.596 [2024-07-13 07:21:09.892836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.892862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.596 [2024-07-13 07:21:09.893020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.596 [2024-07-13 07:21:09.893047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.596 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.893194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.893238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.893369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.893399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.893533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.893562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.893730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.893759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.893916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.893943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.894068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.894096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.894239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.894269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.894404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.894434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 
00:33:40.597 [2024-07-13 07:21:09.894623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.894652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.894796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.894824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.894983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.895010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.895122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.895149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.895321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.895348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.895595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.895624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.895785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.895814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.895986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.896013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.896143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.896170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.896296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.896338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 
00:33:40.597 [2024-07-13 07:21:09.896477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.896506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.896697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.896724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.896877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.896904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.897029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.897056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.897232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.897258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.897409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.897436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.897563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.897589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.897743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.897769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.897916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.897944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.898068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.898095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 
00:33:40.597 [2024-07-13 07:21:09.898215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.898241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.898409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.898441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.898571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.898598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.898775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.898802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.898950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.898977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.899096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.899124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.597 qpair failed and we were unable to recover it. 00:33:40.597 [2024-07-13 07:21:09.899298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.597 [2024-07-13 07:21:09.899325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.899479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.899505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.899630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.899658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.899778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.899804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 
00:33:40.598 [2024-07-13 07:21:09.899950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.899978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.900106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.900133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.900277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.900304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.900452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.900478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.900627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.900654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.900803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.900830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.900983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.901011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.901208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.901238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.901409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.901436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.901581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.901608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 
00:33:40.598 [2024-07-13 07:21:09.901758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.901785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.901945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.901973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.902102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.902130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.902281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.902307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.902466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.902492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.902637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.902663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.902812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.902839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.902993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.903020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.903144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.903171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.903344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.903370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 
00:33:40.598 [2024-07-13 07:21:09.903495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.903523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.903644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.903671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.903854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.903887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.904015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.904041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.904188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.904214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.904361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.904387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.904537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.904564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.904714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.904741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.904893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.904921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.905041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.905069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 
00:33:40.598 [2024-07-13 07:21:09.905187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.905213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.905337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.905367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.905518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.905544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.905658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.905685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.905821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.905847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.906006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.598 [2024-07-13 07:21:09.906033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.598 qpair failed and we were unable to recover it. 00:33:40.598 [2024-07-13 07:21:09.906210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.906237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.906397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.906424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.906540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.906567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.906720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.906747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-13 07:21:09.906898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.906925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.907051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.907077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.907226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.907253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.907402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.907428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.907580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.907606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.907772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.907800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.907927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.907955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.908067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.908094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.908245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.908272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.908421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.908447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-13 07:21:09.908598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.908625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.908808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.908834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.909017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.909044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.909193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.909220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.909369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.909396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.909546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.909572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.909691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.909718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.909876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.909903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.910082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.910112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.910257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.910284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-13 07:21:09.910438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.910464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.910634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.910664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.910798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.910824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.910947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.910972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.911124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.911151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.911274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.911300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.911426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.911453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.911578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.911604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.911721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.911748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-13 07:21:09.911871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-13 07:21:09.911900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.603 [2024-07-13 07:21:09.936368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.936394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.936510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.936536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.936715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.936741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.936859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.936894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.937050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.937078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.937228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.937254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.937383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.937409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.937532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.937558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.937741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.937801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.937966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.937997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.938121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.938152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.938344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.938391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.938554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.938601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.938748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.938784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.938956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.938984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.939104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.939132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.939256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.939299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.939528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.939577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.939720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.939750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
00:33:40.603 [2024-07-13 07:21:09.939890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.603 [2024-07-13 07:21:09.939917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.603 qpair failed and we were unable to recover it.
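For readers triaging this failure: on Linux, errno = 111 is ECONNREFUSED, meaning the host at 10.0.0.2 was reachable but nothing was accepting TCP connections on port 4420 (the IANA-assigned NVMe/TCP port) at that moment, so every new qpair's connect() fails immediately and the qpair is torn down. The two tqpair values seen above (0x7f442c000b90 and 0x7f4424000b90) appear to be heap addresses of the initiator's qpair objects, not different targets. Below is a minimal standalone C sketch, independent of SPDK, that reproduces the same errno; the loopback address is a stand-in assumption since the test bed's 10.0.0.2 is not reachable outside this CI environment.

/* Hedged sketch, not SPDK code: reproduce "connect() failed, errno = 111".
 * Connecting to a port with no listener yields ECONNREFUSED (111) when the
 * host itself is up; an unreachable host would instead give ETIMEDOUT or
 * EHOSTUNREACH. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* stand-in for 10.0.0.2 */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Expected: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}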
00:33:40.604 [2024-07-13 07:21:09.945839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.945872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.946026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.946053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.946191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.946221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.946377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.946406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.946571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.946601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.946760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.946786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.946934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.946961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.947106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.947132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.947282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.947308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.947482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.947511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-13 07:21:09.947680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.947708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.947912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.947939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.948089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.948116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.948296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-13 07:21:09.948324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-13 07:21:09.948516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.948545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.948725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.948755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.948938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.948964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.949089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.949115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.949262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.949287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.949455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.949484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 
00:33:40.605 [2024-07-13 07:21:09.949643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.949673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.949863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.949895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.950032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.950058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.950227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.950271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.950410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.950454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.950589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.950619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.950792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.950822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.951007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.951034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.951202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.951231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.951420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.951450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 
00:33:40.605 [2024-07-13 07:21:09.951667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.951717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.951851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.951902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.952047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.952073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.952200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.952227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.952401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.952444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.952581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.952610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.952774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.952803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.953006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.953034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.953185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.953216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.953393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.953420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 
00:33:40.605 [2024-07-13 07:21:09.953566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.953595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.953782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.953812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.953988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.954015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.954184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.954213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.954409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.954438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.954636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.954662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.954862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.954897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.955059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.955088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.955261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.955287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.955485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.955514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 
00:33:40.605 [2024-07-13 07:21:09.955649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.955678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.955843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.955884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.956062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.956091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.956249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.956278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-13 07:21:09.956451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-13 07:21:09.956477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.956601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.956644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.956814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.956840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.957022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.957048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.957219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.957248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.957404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.957433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 
00:33:40.606 [2024-07-13 07:21:09.957599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.957625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.957815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.957844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.957991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.958020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.958222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.958248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.958417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.958446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.958652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.958679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.958855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.958888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.959061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.959090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.959230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.959258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.959428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.959454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 
00:33:40.606 [2024-07-13 07:21:09.959598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.959640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.959810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.959836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.959991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.960017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.960215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.960244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.960404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.960433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.960614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.960640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.960839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.960908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.961044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.961070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.961247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.961277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.961396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.961422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 
00:33:40.606 [2024-07-13 07:21:09.961573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.961615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.961801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.961827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.961978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.962005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.962131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.962177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.962317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.962343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.962535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.962564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.962741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.962767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.962887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.962914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.963063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.963106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 00:33:40.606 [2024-07-13 07:21:09.963234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.963262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.606 qpair failed and we were unable to recover it. 
00:33:40.606 [2024-07-13 07:21:09.963397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.606 [2024-07-13 07:21:09.963424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.963547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.963574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.963753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.963780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.963955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.963982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.964108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.964134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.964282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.964308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.964514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.964541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.964672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.964701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.964846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.964887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.965031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.965058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 
00:33:40.607 [2024-07-13 07:21:09.965203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.965246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.965443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.965469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.965618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.965643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.965837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.965874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.966044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.966071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.966221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.966247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.966418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.966447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.966600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.966629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.966800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.966826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.966974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.967000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 
00:33:40.607 [2024-07-13 07:21:09.967168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.967197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.967399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.967425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.967596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.967625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.967786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.967814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.967991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.968018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.968187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.968216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.968405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.968434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.968600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.968626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.968786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.968819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.969022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.969052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 
00:33:40.607 [2024-07-13 07:21:09.969244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.969270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.969406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.969436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.969561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.969590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.969759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.969785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.969952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.969982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.970155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.970182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.970332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.970358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.970559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.970588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.970755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.970784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-13 07:21:09.970950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.970977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 
00:33:40.607 [2024-07-13 07:21:09.971144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-13 07:21:09.971173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.971332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.971361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.971532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.971558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.971675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.971717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.971873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.971900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.972061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.972087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.972247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.972276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.972405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.972434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.972602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.972628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.972822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.972850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-13 07:21:09.973027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.973056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.973229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.973255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.973407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.973434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.973579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.973605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.973781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.973807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.973961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.973987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.974186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.974215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.974394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.974420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.974543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.974590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.974792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.974821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-13 07:21:09.975041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.975068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.975236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.975264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.975426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.975456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.975651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.975677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.975849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.975885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.976089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.976115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.976266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.976292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.976494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.976523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.976723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.976753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-13 07:21:09.976884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-13 07:21:09.976911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-13 07:21:09.977038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.977066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.977216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.977244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.977366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.977392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.977539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.977566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.977762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.977791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.977959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.977986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.978117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.978143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.978286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.978312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.978490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.978516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.978651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.978680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.978837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.608 [2024-07-13 07:21:09.978873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.608 qpair failed and we were unable to recover it.
00:33:40.608 [2024-07-13 07:21:09.979026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.979052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.979227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.979256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.979420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.979449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.979623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.979649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.979816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.979844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.979994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.980023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.980187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.980213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.980365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.980391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.980586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.980612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.980766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.980792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.980920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.980947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.981100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.981127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.981276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.981302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.981493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.981521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.981726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.981752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.981892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.981919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.982086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.982117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.982288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.982314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.982461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.982488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.982658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.982687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.982851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.982887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.983049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.983075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.983204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.983230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.983401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.983431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.983626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.983652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.983819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.983847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.984030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.984056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.984227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.984257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.984379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.984405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.984557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.984584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.984727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.984753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.984950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.984979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.985111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.985139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.985330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.985356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.985553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.985582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.985717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.985746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.985920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.985946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.986113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.986142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.986329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.986358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.986555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.986581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-13 07:21:09.986734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-13 07:21:09.986760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.986913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.986941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.987094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.987120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.987248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.987275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.987400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.987426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.987574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.987600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.987746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.987772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.987894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.987920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.988030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.988057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.988184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.988212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.988361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.988388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.988562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.988588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.988759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.988789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.988931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.988958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.989115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.989141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.989300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.989329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.989494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.989523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.989680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.989706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.989900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.989929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.990094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.990122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.990269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.990295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.990444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.990470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.990691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.990717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.990891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.990918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.991039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.991064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.991240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.991266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.991416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.991441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.991587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.991618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.991792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.991821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.992003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.992030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.992148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.992190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.992322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.992353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.992552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.992578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.992743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.992772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.992929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.992959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.610 [2024-07-13 07:21:09.993116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.610 [2024-07-13 07:21:09.993144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.610 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.993289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.993316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.993518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.993547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.993712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.993738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.993886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.993929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.994064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.994092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.994248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.994275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.994427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.994469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.994631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.994661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.994835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.994861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.995046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.995074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.995263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.995292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.995459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.995485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.995606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.995632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.995807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.995850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.996017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.996044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.996165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.996192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.996362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.996392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.996533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.996559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.996738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.996767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.996914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.996941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.997088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.997114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.997257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.997300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.997501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.997530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.997699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.997725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.997903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.997958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.998144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.998170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.998291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.998317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.998520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.998549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.998709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.998740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.998921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.998947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.999102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.999136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.999283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.999315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.999495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.999521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.999670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.999697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:09.999843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:09.999896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:10.000068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:10.000105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:10.000298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:10.000327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:10.000489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:10.000517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:10.000695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:10.000733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:10.000907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:10.000947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.611 qpair failed and we were unable to recover it.
00:33:40.611 [2024-07-13 07:21:10.001135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.611 [2024-07-13 07:21:10.001174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.001369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.001404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.001569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.001607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.001776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.001813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.002016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.002053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.002237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.002273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.002444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.002495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.002696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.002729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-13 07:21:10.002902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-13 07:21:10.002936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.895 [2024-07-13 07:21:10.003107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.895 [2024-07-13 07:21:10.003146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.895 qpair failed and we were unable to recover it.
00:33:40.895 [2024-07-13 07:21:10.003323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.895 [2024-07-13 07:21:10.003359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.895 qpair failed and we were unable to recover it.
00:33:40.895 [2024-07-13 07:21:10.003509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.895 [2024-07-13 07:21:10.003546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.895 qpair failed and we were unable to recover it.
00:33:40.895 [2024-07-13 07:21:10.003750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.895 [2024-07-13 07:21:10.003786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.895 qpair failed and we were unable to recover it.
00:33:40.895 [2024-07-13 07:21:10.003925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.003961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.004130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.004181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.004356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.004389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.004583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.004617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.004815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.004853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.005054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.005091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.005300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.005334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.005496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.005535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.005710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.005747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.005940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.005976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.006153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.006188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.006371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.006410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.006585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.006621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.006798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.006835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.007018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.007056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.007250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.007284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.007456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.007494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.007687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.007728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.007912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.007954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.008147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.008185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.008393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.008433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.008603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.008638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.008832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.008879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.009047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.009082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.009226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.009263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.009457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.009491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.009675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.009713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.009981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.010017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.010201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.010241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.010437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.896 [2024-07-13 07:21:10.010475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.896 qpair failed and we were unable to recover it.
00:33:40.896 [2024-07-13 07:21:10.010702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.010737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.010958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.010997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.011190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.011229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.011429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.011462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.011653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.011693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.011895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.011946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.012144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.012180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.012394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.012432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.012626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.896 [2024-07-13 07:21:10.012664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.896 qpair failed and we were unable to recover it. 00:33:40.896 [2024-07-13 07:21:10.012838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.012883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 
00:33:40.897 [2024-07-13 07:21:10.013065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.013105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.013282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.013321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.013520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.013555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.013747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.013785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.013960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.013997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.014228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.014263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.014447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.014482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.014635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.014685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.014900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.014946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.015142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.015178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 
00:33:40.897 [2024-07-13 07:21:10.015373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.015408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.015597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.015631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.015793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.015825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.016019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.016057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.016238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.016284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.016486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.016520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.016739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.016776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.016989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.017023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.017232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.017271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.017423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.017457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 
00:33:40.897 [2024-07-13 07:21:10.017612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.017661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.017825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.017886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.018044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.018078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.018983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.019023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.019162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.019190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.019327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.019356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.019506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.019533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.019687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.019715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.019843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.019878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.020070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.020098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 
00:33:40.897 [2024-07-13 07:21:10.020239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.020268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.020442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.020470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.020625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.020653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.020779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.020806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.020963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.020990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.021161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.021188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.021303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.021329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.021482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.021509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.897 [2024-07-13 07:21:10.021640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.897 [2024-07-13 07:21:10.021668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.897 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.021842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.021875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 
00:33:40.898 [2024-07-13 07:21:10.022033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.022060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.022244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.022287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.022464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.022492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.022659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.022689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.022887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.022925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.023080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.023106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.023272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.023301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.023472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.023498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.023646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.023674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.023826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.023855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 
00:33:40.898 [2024-07-13 07:21:10.024040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.024067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.024237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.024264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.024442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.024471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.024632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.024661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.024858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.024893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.025069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.025098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.025281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.025307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.025470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.025496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.025682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.025712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.025864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.025909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 
00:33:40.898 [2024-07-13 07:21:10.026059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.026085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.026266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.026295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.026473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.026500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.026680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.026706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.026854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.026893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.027091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.027117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.027316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.027342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.027490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.027518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.027708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.027736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.027910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.027937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 
00:33:40.898 [2024-07-13 07:21:10.028107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.028136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.028300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.028329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.028480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.028506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.028644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.028687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.028857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.028894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.029071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.029099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.029267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.029297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.029449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.029478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.898 [2024-07-13 07:21:10.029655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.898 [2024-07-13 07:21:10.029681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.898 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.029823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.029849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 
00:33:40.899 [2024-07-13 07:21:10.030007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.030036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.030205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.030231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.030355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.030381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.030539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.030568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.030734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.030760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.030940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.030971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.031163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.031192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.031383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.031409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.031534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.031561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.031736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.031763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 
00:33:40.899 [2024-07-13 07:21:10.031906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.031933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.032098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.032132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.032271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.032300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.032471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.032497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.032651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.032678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.032830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.032857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.032996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.033024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.033153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.033179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.033376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.033410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.033576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.033603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 
00:33:40.899 [2024-07-13 07:21:10.033801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.033829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.034024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.034050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.034183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.034210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.034359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.034385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.034526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.034553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.034698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.034725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.034918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.034947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.035101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.035135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.035301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.035328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.035448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.035491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 
00:33:40.899 [2024-07-13 07:21:10.035635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.035662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.035815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.035841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.036005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.036034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.036196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.036225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.036426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.036452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.036646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.036675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.036802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.036832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.899 [2024-07-13 07:21:10.037013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.899 [2024-07-13 07:21:10.037039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.899 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.037162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.037205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.037333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.037362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 
00:33:40.900 [2024-07-13 07:21:10.037541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.037568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.037720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.037746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.037914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.037943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.038111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.038149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.038299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.038342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.038508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.038538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.038713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.038739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.038861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.038922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.039054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.039083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.039294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.039320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 
00:33:40.900 [2024-07-13 07:21:10.039444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.039471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.039623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.039650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.039817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.039845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.040000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.040026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.040146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.040172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.040322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.040348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.040468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.040494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.040711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.040737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.040882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.040923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.041093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.041134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 
00:33:40.900 [2024-07-13 07:21:10.041279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.041305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.041457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.041483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.041633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.041675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.041838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.041873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.042017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.042043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.042190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.042232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.042358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.042386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.042583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.042610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.042760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.042787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.042967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.042996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 
00:33:40.900 [2024-07-13 07:21:10.043162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.043189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.900 qpair failed and we were unable to recover it. 00:33:40.900 [2024-07-13 07:21:10.043309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.900 [2024-07-13 07:21:10.043335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.043539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.043567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.043761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.043788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.043963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.043990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.044144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.044171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.044332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.044358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.044510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.044536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.044747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.044773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.044930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.044956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 
00:33:40.901 [2024-07-13 07:21:10.045075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.045101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.045273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.045314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.045479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.045505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.045705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.045732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.045857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.045890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.046041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.046071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.046231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.046257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.046402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.046444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.046608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.046635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.046754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.046797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 
00:33:40.901 [2024-07-13 07:21:10.046966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.046994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.047129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.047156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.047312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.047340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.047490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.047517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.047679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.047706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.047831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.047879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.048063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.048089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.048220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.048246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.048367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.048393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.048545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.048572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 
00:33:40.901 [2024-07-13 07:21:10.048717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.048743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.048895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.048926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.049048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.049074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.049202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.049229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.049401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.049428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.049579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.049605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.049753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.049779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.049937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.049963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.050112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.050148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.050298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.050324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 
00:33:40.901 [2024-07-13 07:21:10.050452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.901 [2024-07-13 07:21:10.050480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.901 qpair failed and we were unable to recover it. 00:33:40.901 [2024-07-13 07:21:10.050631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.050657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.050812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.050838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.050987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.051013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.051137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.051164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.051285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.051312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.051456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.051482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.051625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.051652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.051829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.051856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.052023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.052049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 
00:33:40.902 [2024-07-13 07:21:10.052223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.052249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.052392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.052419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.052569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.052595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.052744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.052771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.052927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.052953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.053066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.053097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.053245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.053272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.053423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.053449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.053611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.053637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.053763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.053790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 
00:33:40.902 [2024-07-13 07:21:10.053937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.053964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.054085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.054112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.054263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.054289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.054409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.054435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.054581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.054607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.054766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.054792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.054946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.054973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.055133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.055160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.055311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.055337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.055502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.055528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 
00:33:40.902 [2024-07-13 07:21:10.055704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.055730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.055849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.055881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.056040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.056066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.056216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.056243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.056389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.056415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.056531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.056558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.056697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.056723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.056847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.056879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.057033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.057059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.057190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.057216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 
00:33:40.902 [2024-07-13 07:21:10.057363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.057389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.902 qpair failed and we were unable to recover it. 00:33:40.902 [2024-07-13 07:21:10.057537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.902 [2024-07-13 07:21:10.057563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.057718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.057744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.057896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.057926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.058048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.058073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.058258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.058284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.058403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.058429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.058580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.058606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.058761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.058787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.058915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.058943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 
00:33:40.903 [2024-07-13 07:21:10.059064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.059090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.059240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.059266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.059419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.059446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.059600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.059627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.059749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.059777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.059904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.059937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.060061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.060087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.060238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.060265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.060439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.060466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.060586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.060613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 
00:33:40.903 [2024-07-13 07:21:10.060729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.060755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.060909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.060937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.061090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.061118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.061267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.061294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.061455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.061481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.061632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.061657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.061787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.061813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.061974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.062001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.062131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.062157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.062338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.062364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 
00:33:40.903 [2024-07-13 07:21:10.062509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.062535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.062661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.062688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.062836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.062863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.063033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.063059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.063180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.063207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.063334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.063360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.063508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.063534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.063710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.063736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.063859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.063892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.064039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.064081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 
00:33:40.903 [2024-07-13 07:21:10.064255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.064283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.064430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.903 [2024-07-13 07:21:10.064469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.903 qpair failed and we were unable to recover it. 00:33:40.903 [2024-07-13 07:21:10.064639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.064677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.064805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.064832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.064990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.065018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.065146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.065174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.065325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.065353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.065509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.065536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.065685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.065712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.065832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.065859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 
00:33:40.904 [2024-07-13 07:21:10.066003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.066030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.066189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.066217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.066370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.066396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.066567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.066594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.066716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.066743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.066919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.066950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.067095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.067132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.067282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.067309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.067460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.067487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.067642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.067669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 
00:33:40.904 [2024-07-13 07:21:10.067806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.067835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.068023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.068051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.068175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.068202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.068356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.068383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.068526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.068553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.068693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.068720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.068876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.068915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.069038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.069065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.069226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.069256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.069393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.069419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 
00:33:40.904 [2024-07-13 07:21:10.069574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.069601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.069751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.069779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.069910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.069937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.070087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.070118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.070236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.070262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.070471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.070503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.070679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-13 07:21:10.070706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-13 07:21:10.070830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.070882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.071031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.071057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.071196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.071222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-13 07:21:10.071347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.071375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.071672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.071724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.071912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.071939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.072115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.072157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.072390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.072418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.072590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.072616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.072760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.072789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.072957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.072983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.073138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.073164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.073332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.073360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-13 07:21:10.073529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.073555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.073692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.073719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.073877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.073904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.074017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.074043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.074201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.074227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.074390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.074422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.074606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.074634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.074789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.074816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.074950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.074977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.075126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.075153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-13 07:21:10.075300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.075326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.075477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.075503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.075643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.075669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.075832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.075861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.076021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.076048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.076198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.076243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.076410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.076436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.076639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.076668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.076822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.076850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.077037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.077063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-13 07:21:10.077214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.077240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.077419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.077451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.077629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.077655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.077802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.077828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.077997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.078024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.078193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.078219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.078428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.078457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.078605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-13 07:21:10.078633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-13 07:21:10.078797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.078824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.078958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.078985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 
00:33:40.906 [2024-07-13 07:21:10.079136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.079163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.079325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.079352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.079507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.079533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.079658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.079686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.079879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.079924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.080076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.080103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.080341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.080392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.080587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.080613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.080785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.080813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.080967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.080994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 
00:33:40.906 [2024-07-13 07:21:10.081145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.081171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.081337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.081367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.081518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.081546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.081726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.081753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.081907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.081944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.082096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.082127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.082296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.082322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.082475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.082523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.082682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.082711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.082889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.082923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 
00:33:40.906 [2024-07-13 07:21:10.083051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.083078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.083267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.083295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.083440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.083467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.083626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.083652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.083804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.083830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.084010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.084037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.084162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.084190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.084332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.084369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.084512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.084538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.084703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.084732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 
00:33:40.906 [2024-07-13 07:21:10.084893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.084924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.085070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.085098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.085259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.085301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.085467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.085495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.085650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.085676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.085824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.085874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.086054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.086082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.086226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.086252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-13 07:21:10.086443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-13 07:21:10.086472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.086675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.086701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-13 07:21:10.086900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.086948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.087095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.087121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.087308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.087335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.087515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.087541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.087746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.087775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.087955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.087984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.088146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.088172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.088323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.088350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.088496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.088539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.088714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.088740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-13 07:21:10.088886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.088925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.089106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.089144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.089293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.089321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.089512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.089552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.089723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.089752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.089895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.089926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.090057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.090083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.090245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.090287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.090456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.090483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.090631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.090657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-13 07:21:10.090858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.090895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.091087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.091113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.091287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.091315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.091464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.091491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.091659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.091685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.091850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.091915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.092038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.092065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.092197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.092223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.092367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.092393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.092566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.092595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-13 07:21:10.092762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.092788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.092991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.093019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.093151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.093178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.093371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.093396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.093556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.093584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.093767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.093795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.094022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.094048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.094228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-13 07:21:10.094256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-13 07:21:10.094427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.094456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.094653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.094679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 
00:33:40.908 [2024-07-13 07:21:10.094843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.094889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.095061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.095087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.095223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.095249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.095397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.095424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.095593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.095622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.095817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.095843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.095979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.096007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.096156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.096183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.096334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.096360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.096541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.096567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 
00:33:40.908 [2024-07-13 07:21:10.096706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.096735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.096891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.096918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.097072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.097113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.097352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.097381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.097531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.097558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.097747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.097793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.097964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.097991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.098126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.098154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.098299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.098341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.098502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.098528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 
00:33:40.908 [2024-07-13 07:21:10.098677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.098704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.098917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.098944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.099090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.099116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.099243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.099269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.099460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.099488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.099637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.099664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.099831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.099857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.100010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.100038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.100182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.100208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.100357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.100382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 
00:33:40.908 [2024-07-13 07:21:10.100556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.100598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.100791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.100820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.101000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.101030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.908 [2024-07-13 07:21:10.101195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.908 [2024-07-13 07:21:10.101222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.908 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.101383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.101410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.101582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.101608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.101726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.101754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.101948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.101976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.102113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.102139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.102298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.102339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 
00:33:40.909 [2024-07-13 07:21:10.102509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.102538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.102740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.102777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.102953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.102982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.103154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.103181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1671846 Killed "${NVMF_APP[@]}" "$@"
00:33:40.909 [2024-07-13 07:21:10.103364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.103389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.103566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.103594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:40.909 [2024-07-13 07:21:10.103730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.103760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:40.909 [2024-07-13 07:21:10.103902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.103930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:40.909 [2024-07-13 07:21:10.104077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.104103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:40.909 [2024-07-13 07:21:10.104254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.104283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.104415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.104443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.104631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.104660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.104828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.104878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.105053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.105079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.106066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.106098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.106233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.106261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
00:33:40.909 [2024-07-13 07:21:10.106465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.909 [2024-07-13 07:21:10.106494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.909 qpair failed and we were unable to recover it.
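For context on the repeated failure above: errno = 111 is ECONNREFUSED on Linux, i.e. the host's connect() to 10.0.0.2 port 4420 (the standard NVMe/TCP port) is actively refused because nothing is listening while the nvmf application is being killed and restarted. The following is a minimal standalone C sketch of that failure mode, not SPDK's posix_sock_create itself; only the address and port are taken from the log.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same endpoint the test host is dialing in the log above. */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    /* With no listener on the port, connect() fails and errno is 111. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    close(fd);
    return 0;
}

Run against an address with no listener, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c entries; the nvme_tcp layer then reports the qpair as failed and retries, which is why the same pair of messages repeats until the target comes back.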
00:33:40.909 [2024-07-13 07:21:10.106647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.106674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.106827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.106857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.107049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.107077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.107255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.107300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.107467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.107497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.107694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.107721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.107878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.107916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.108034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.108060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.108213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.108240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 00:33:40.909 [2024-07-13 07:21:10.108414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.909 [2024-07-13 07:21:10.108447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.909 qpair failed and we were unable to recover it. 
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1672812
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1672812
00:33:40.909 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1672812 ']'
00:33:40.910 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:40.910 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:40.910 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:40.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:40.910 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:40.910 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with the trace above, the connect() failure record appears 14 times (07:21:10.108628 through 07:21:10.111313): 8 against tqpair=0x7f442c000b90, then 6 against tqpair=0x7f4424000b90, all with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
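[note: errno = 111 on Linux is ECONNREFUSED — nothing is accepting TCP connections on 10.0.0.2:4420 yet, which is expected here while nvmf_tgt (pid 1672812) is still starting inside the cvl_0_0_ns_spdk namespace and the host side keeps retrying. A minimal, illustrative probe loop one could run by hand to watch the listener come up (not part of the test scripts; assumes nc is installed):

  # poll the NVMe-oF TCP listener until connect() stops being refused
  for i in $(seq 1 50); do
    if ip netns exec cvl_0_0_ns_spdk nc -z -w 1 10.0.0.2 4420; then
      echo "listener up after $i attempts"; break
    fi
    sleep 0.1   # ECONNREFUSED (errno 111): target not listening yet, retry
  done
]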
00:33:40.910 [2024-07-13 07:21:10.111524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.910 [2024-07-13 07:21:10.111552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420
00:33:40.910 qpair failed and we were unable to recover it.
[... the same failure record repeats 69 more times (07:21:10.111723 through 07:21:10.126818), the reported pointer moving among tqpair=0x7f4424000b90, tqpair=0x7f442c000b90, and tqpair=0x18c2450, all with addr=10.0.0.2, port=4420 ...]
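[note: to confirm what errno 111 maps to on the machine at hand, a one-line check (illustrative; assumes python3 is available):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused
]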
00:33:40.912 [2024-07-13 07:21:10.126989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.912 [2024-07-13 07:21:10.127016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.912 qpair failed and we were unable to recover it.
[... the same failure record repeats 99 more times (07:21:10.127176 through 07:21:10.145729), every attempt against tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 ...]
00:33:40.914 [2024-07-13 07:21:10.145894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-13 07:21:10.145938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-13 07:21:10.146070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.146096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.146273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.146301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.146434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.146462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.146602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.146629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.146849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.146891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.147038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.147065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.147188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.147215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.147386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.147415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.147566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.147608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 
00:33:40.915 [2024-07-13 07:21:10.147761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.147787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.147937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.147967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.148091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.148116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.148245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.148270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.148443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.148494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.148645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.148674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.148836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.148880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.149030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.149057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.149197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.149222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.149375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.149400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 
00:33:40.915 [2024-07-13 07:21:10.149525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.149550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.149688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.149717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.149893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.149919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.150042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.150084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.150269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.150318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.150474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.150500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.150674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.150702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.150838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.150863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.151042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.151071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.151241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.151269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 
00:33:40.915 [2024-07-13 07:21:10.151405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.151433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.151595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.151623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.151764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.151790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.151920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.151946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.152090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.152117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.152252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.152277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.152422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.152451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.152603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.152648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.152814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.152842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.153024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.153061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 
00:33:40.915 [2024-07-13 07:21:10.153267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.153317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.153461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.915 [2024-07-13 07:21:10.153505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:40.915 qpair failed and we were unable to recover it. 00:33:40.915 [2024-07-13 07:21:10.153699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.153726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.153882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.153909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.154062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.154087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.154281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.154328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.154505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.154550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.154687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.154715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.154889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.154916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.155041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.155067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 
00:33:40.916 [2024-07-13 07:21:10.155231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.155256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.155449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.155499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.155639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.155667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.155802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.155830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.155985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.156011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.156164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.156190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.156347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.156372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.156530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.156559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.156705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.156734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 00:33:40.916 [2024-07-13 07:21:10.156879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.916 [2024-07-13 07:21:10.156906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.916 qpair failed and we were unable to recover it. 
00:33:40.916 (same triplet repeated from [2024-07-13 07:21:10.157026] through [2024-07-13 07:21:10.158312] on tqpair=0x7f442c000b90, addr=10.0.0.2, port=4420)
00:33:40.916 [2024-07-13 07:21:10.158336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:33:40.916 [2024-07-13 07:21:10.158413] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:40.916 [2024-07-13 07:21:10.158495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.916 [2024-07-13 07:21:10.158521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.916 qpair failed and we were unable to recover it.
00:33:40.916 [2024-07-13 07:21:10.158705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.916 [2024-07-13 07:21:10.158730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.916 qpair failed and we were unable to recover it.
00:33:40.919 (previous three messages repeated for every reconnect attempt from [2024-07-13 07:21:10.158877] through [2024-07-13 07:21:10.178117], each failing with errno = 111 against addr=10.0.0.2, port=4420; tqpair=0x7f442c000b90 throughout, except a run on tqpair=0x7f441c000b90 between 07:21:10.167990 and 07:21:10.170680)
00:33:40.919 [2024-07-13 07:21:10.178236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.178261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.178403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.178434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.178593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.178620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.178751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.178777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.178946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.178972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.179123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.179149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.179328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.179353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.179499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.179527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.179653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.179680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.179807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.179835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 
00:33:40.919 [2024-07-13 07:21:10.180014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.180040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.180166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.180193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.180320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.180346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.180513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.180541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.180711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.180740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.180898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.180925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.181052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.181078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.181203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.181257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.184389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.184424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.184605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.184634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 
00:33:40.919 [2024-07-13 07:21:10.184763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.184790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.184944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.184973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.185108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.185135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.185267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.185294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.185424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.185450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.185598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.185626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.185771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.185807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.185964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.185990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-13 07:21:10.186131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-13 07:21:10.186157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.186278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.186304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.920 [2024-07-13 07:21:10.186452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.186478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.186685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.186711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.186834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.186863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.187026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.187052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.187176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.187202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.187359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.187386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.187550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.187577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.187766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.187794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.187948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.187975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.188125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.188150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.920 [2024-07-13 07:21:10.188388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.188414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.188559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.188588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.188738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.188767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.188949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.188976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.189094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.189120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.189306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.189334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.189537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.189566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.189732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.189758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.189890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.189917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.190056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.190083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.920 [2024-07-13 07:21:10.190240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.190269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.190439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.190468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.190608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.190634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.190785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.190810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.190969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.190998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.191159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.191191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.191329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.191358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.191500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.191526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.191654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.191691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.191854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.191900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.920 [2024-07-13 07:21:10.192027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.192054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.192194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.192231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.192382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.192410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.192561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.192592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.192740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.192769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.192940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.192967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.193096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.193121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.193270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.193296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.193439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.193468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-13 07:21:10.193648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-13 07:21:10.193675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-13 07:21:10.193832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.193858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.194006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.194033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.194177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.194204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.194345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.194372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.194501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.194531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.194689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.194726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.194890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.194935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.195065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.195092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.195232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.195260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.195432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.195460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-13 07:21:10.195635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.195663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.195822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.195852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.195991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.196017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.196139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.196167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.196311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.196338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.196501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.196528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.196675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.196702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.196877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.196903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.197021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.197047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.197172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.197203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-13 07:21:10.197328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.197353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.197524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.197551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.197680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.197707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.197847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.197892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.198035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.198061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.198230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.198257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.198417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.198445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.921 [2024-07-13 07:21:10.198597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.198625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.198766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.198795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.198948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.198983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-13 07:21:10.199147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.199211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.199386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.199415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.199571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.199598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.199828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.199855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.200016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.200042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.200191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.200216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.200401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.200428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.200636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.200663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.200945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.200971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-13 07:21:10.201216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.201244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-13 07:21:10.201423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-13 07:21:10.201451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.201649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.201678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.201816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.201841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.201980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.202006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.202148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.202174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.202397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.202423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.202563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.202606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.202736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.202761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.202887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.202913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.203040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.203065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 
00:33:40.922 [2024-07-13 07:21:10.203213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.203238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.203413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.203443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.203470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:40.922 [2024-07-13 07:21:10.203566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.203593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.203785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.203811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.203945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.203971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.204119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.204145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.204303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.204329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.204447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.204472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-13 07:21:10.204617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-13 07:21:10.204643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 
00:33:40.922 [2024-07-13 07:21:10.204791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.922 [2024-07-13 07:21:10.204816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.922 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats 15 more times for tqpair=0x7f442c000b90 between 07:21:10.204944 and 07:21:10.207603 ...]
00:33:40.923 [2024-07-13 07:21:10.207745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.923 [2024-07-13 07:21:10.207785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420
00:33:40.923 qpair failed and we were unable to recover it.
[... the sequence then repeats continuously, alternating between tqpair=0x7f441c000b90 and tqpair=0x7f442c000b90 (roughly 190 further attempts through 07:21:10.241488, every one failing with errno = 111 against addr=10.0.0.2, port=4420). One unrelated line is interleaved at 07:21:10.233441: ...]
00:33:40.926 [2024-07-13 07:21:10.233441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
[... connect() retries resume immediately after the notice and keep failing through 07:21:10.241488; the final reported failure is for tqpair=0x7f442c000b90, and no qpair connection in this window succeeds ...]
00:33:40.927 [2024-07-13 07:21:10.241663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-13 07:21:10.241689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-13 07:21:10.241843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-13 07:21:10.241886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.242036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.242061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.242252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.242278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.242446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.242472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.242603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.242629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.242808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.242834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.242993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.243019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.243173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.243199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.243316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.243342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 
00:33:40.928 [2024-07-13 07:21:10.243494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.243520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.243668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.243694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.243840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.243880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.244035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.244061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.244179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.244204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.244429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.244454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.244579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.244604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.244777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.244803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.244986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.245013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.245141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.245167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 
00:33:40.928 [2024-07-13 07:21:10.245340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.245365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.245589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.245615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.245769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.245800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.246014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.246040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.246197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.246222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.246451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.246476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.246627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.246653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.246881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.246908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.247089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.247115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.247267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.247293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 
00:33:40.928 [2024-07-13 07:21:10.247519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.247544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.247696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.247722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.247948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.247974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.248121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.248147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.248296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.248321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.248443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.248469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.248652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.248678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.248798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.248823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.248981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.249007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.249152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.249177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 
00:33:40.928 [2024-07-13 07:21:10.249331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.249357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.249500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-13 07:21:10.249526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-13 07:21:10.249652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.249679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.249825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.249851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.250082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.250107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.250252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.250278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.250436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.250462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.250686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.250711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.250858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.250889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.251064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.251090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 
00:33:40.929 [2024-07-13 07:21:10.251264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.251289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.251440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.251466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.251616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.251641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.251812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.251855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.252012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.252041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.252224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.252251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.252404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.252429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.252551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.252577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.252758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.252784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.252913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.252941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 
00:33:40.929 [2024-07-13 07:21:10.253065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.253091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.253264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.253290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.253437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.253472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.253649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.253674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.253796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.253821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.253964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.253990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.254168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.254193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.254322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.254347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.254469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.254494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.254641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.254666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 
00:33:40.929 [2024-07-13 07:21:10.254820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.254846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.255003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.255030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.255158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.255184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.255330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.255355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.255497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.255523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.255654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.255680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.255810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.255837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.256000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.256041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.256176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.256204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.256331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.256357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 
00:33:40.929 [2024-07-13 07:21:10.256487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.256515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.256664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.256689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.929 qpair failed and we were unable to recover it. 00:33:40.929 [2024-07-13 07:21:10.256835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.929 [2024-07-13 07:21:10.256861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.257016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.257041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.257170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.257196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.257343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.257369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.257506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.257534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.257686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.257711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.257853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.257884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.258038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.258067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 
00:33:40.930 [2024-07-13 07:21:10.258211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.258237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.258355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.258381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.258534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.258560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.258738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.258764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.258921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.258947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.259102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.259130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.259276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.259302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.259456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.259481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.259643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.259670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.259822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.259848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 
00:33:40.930 [2024-07-13 07:21:10.260008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.260033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.260183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.260208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.260330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.260355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.260515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.260540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.260659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.260686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.260805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.260830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.261004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.261030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.261210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.261236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.261388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.261413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.261538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.261563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 
00:33:40.930 [2024-07-13 07:21:10.261724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.261753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.261907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.261935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.262084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.262110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.262259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.262285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.262463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.262488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.262636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.262663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.262796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.262824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.930 [2024-07-13 07:21:10.262951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.930 [2024-07-13 07:21:10.262977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.930 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.263128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.263153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.263383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.263409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 
00:33:40.931 [2024-07-13 07:21:10.263555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.263581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.263708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.263734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.263887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.263916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.264039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.264064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.264219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.264246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.264370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.264396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.264541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.264568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.264714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.264740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.264904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.264930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 00:33:40.931 [2024-07-13 07:21:10.265057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.931 [2024-07-13 07:21:10.265086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.931 qpair failed and we were unable to recover it. 
00:33:40.931 [2024-07-13 07:21:10.265235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-13 07:21:10.265260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.935 [... the same three-line sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 07:21:10.265 through 07:21:10.302, alternating between tqpair=0x7f442c000b90 and tqpair=0x7f441c000b90, always with addr=10.0.0.2, port=4420 ...]
00:33:40.936 [2024-07-13 07:21:10.302202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.302227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.302375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.302401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.302530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.302557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.302740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.302766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.302905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.302932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.303067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.303092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.303246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.303272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.303445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.303471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.303617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.303643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.303770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.303796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 
00:33:40.936 [2024-07-13 07:21:10.303940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.303967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.304116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.304141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.304298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.936 [2024-07-13 07:21:10.304325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.936 qpair failed and we were unable to recover it. 00:33:40.936 [2024-07-13 07:21:10.304480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.304505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.304650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.304676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.304810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.304836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.304970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.304997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.305113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.305139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.305289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.305316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.305439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.305465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 
00:33:40.937 [2024-07-13 07:21:10.305618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.305645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.305812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.305838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.305991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.306017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.306143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.306168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.306295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.306320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.306449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.306474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.306698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.306723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.306912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.306938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.307077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.307103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.307250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.307279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 
00:33:40.937 [2024-07-13 07:21:10.307401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.307426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.307579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.307604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.307750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.307775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.307902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.307929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.308156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.308183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.308332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.308359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.308535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.308560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.308709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.308735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.308880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.308906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.309034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.309060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 
00:33:40.937 [2024-07-13 07:21:10.309182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.309208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.309358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.309383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.309514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.309540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.309712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.309737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.309862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.309908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.310059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.310084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.310203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.310228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.310356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.310381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.310537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.310562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.310711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.310736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 
00:33:40.937 [2024-07-13 07:21:10.310887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.310913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.311035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.311060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.311202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.311227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.937 [2024-07-13 07:21:10.311355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.937 [2024-07-13 07:21:10.311380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.937 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.311560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.311586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.311734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.311759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.311922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.311948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.312129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.312154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.312333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.312359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.312531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.312556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 
00:33:40.938 [2024-07-13 07:21:10.312675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.312701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.312818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.312843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.313001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.313027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.313170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.313196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.313345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.313371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.313512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.313537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.313687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.313713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.313838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.313864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.314022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.314047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.314194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.314223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 
00:33:40.938 [2024-07-13 07:21:10.314345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.314371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.314511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.314537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.314696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.314722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.314876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.314902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.315018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.315044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.315166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.315191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.315311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.315336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.315508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.315534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.315684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.315710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.315831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.315856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 
00:33:40.938 [2024-07-13 07:21:10.316010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.316036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.316163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.316189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.316315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.316341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.316573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.316599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.316743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.316768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.316918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.316945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.317094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.317119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.317269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.317294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.317409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.317434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.317571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.317597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 
00:33:40.938 [2024-07-13 07:21:10.317737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.317762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.317910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.317936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.318111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.318136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.318257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.938 [2024-07-13 07:21:10.318282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.938 qpair failed and we were unable to recover it. 00:33:40.938 [2024-07-13 07:21:10.318431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.318456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.318570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.318595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.318718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.318743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.318922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.318948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.319173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.319198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.319352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.319378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 
00:33:40.939 [2024-07-13 07:21:10.319501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.319527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.319752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.319777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.319925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.319951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.320101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.320127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.320300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.320325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.320479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.320504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.320652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.320678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.320852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.320883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.321011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.321037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.321274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.321304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 
00:33:40.939 [2024-07-13 07:21:10.321462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.321487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.321628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.321654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.321823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.321848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.322016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.322042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.322194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.322220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.322368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.322394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.322520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.322546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.322724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.322750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.322870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.322897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.323017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.323043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 
00:33:40.939 [2024-07-13 07:21:10.323211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.323243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.323358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.323384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.323509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.323534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.323703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.323729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.323859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.323892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.324014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.324040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-13 07:21:10.324171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-13 07:21:10.324197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-13 07:21:10.324322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-13 07:21:10.324349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-13 07:21:10.324474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-13 07:21:10.324500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-13 07:21:10.324652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-13 07:21:10.324677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 
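errno = 111 on Linux is ECONNREFUSED: the TCP SYN reached 10.0.0.2 but nothing was listening on port 4420 (the NVMe-oF TCP target was not up yet, or had already gone away), so the kernel answered with a RST and the connect() inside posix_sock_create failed. A minimal stand-alone sketch, not part of this test, that produces the same errno against any host:port with no listener (the address and port here are simply the ones from the log):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same address/port as in the log; any reachable host with no
         * listener on the port behaves identically. */
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener this prints:
             * connect: Connection refused (errno = 111) */
            printf("connect: %s (errno = %d)\n", strerror(errno), errno);
        }
        close(fd);
        return 0;
    }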
00:33:40.940 [2024-07-13 07:21:10.324753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:40.940 [2024-07-13 07:21:10.324789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:40.940 [2024-07-13 07:21:10.324803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:40.940 [2024-07-13 07:21:10.324815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:40.940 [2024-07-13 07:21:10.324825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:40.940 [2024-07-13 07:21:10.324904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:40.940 [2024-07-13 07:21:10.324967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:40.940 [2024-07-13 07:21:10.325013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:40.940 [2024-07-13 07:21:10.325016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:40.940 [... interleaved with these notices, the same connect()/qpair failure repeats from 07:21:10.324794 through 07:21:10.325863 ...]
00:33:41.210 [2024-07-13 07:21:10.356884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.356910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.357044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.357069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.357237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.357262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.357388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.357414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.357540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.357565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.357679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.357704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.357886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.357929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.358091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.358119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.358240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.358267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.358403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.358430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 
00:33:41.210 [2024-07-13 07:21:10.358548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.358574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.358685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.358711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.358830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.358857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.359021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.359049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.359170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.359195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.359357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.359383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.359553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.359579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.359725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.359750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.359877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.359904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.360032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.360063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 
00:33:41.210 [2024-07-13 07:21:10.360189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.360215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.360337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.360365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.360518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.360545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.360661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.360688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.360840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.360871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.210 qpair failed and we were unable to recover it. 00:33:41.210 [2024-07-13 07:21:10.360997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.210 [2024-07-13 07:21:10.361024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.361160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.361185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.361311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.361336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.361474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.361499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.361646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.361688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 
00:33:41.211 [2024-07-13 07:21:10.361819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.361845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.362001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.362040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.362187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.362215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.362352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.362379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.362522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.362548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.362695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.362721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.362852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.362885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.363027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.363054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.363180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.363206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.363329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.363356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 
00:33:41.211 [2024-07-13 07:21:10.363467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.363493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.363614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.363640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.363757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.363782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.363955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.363994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.364139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.364166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.364306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.364332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.364566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.364594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.364741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.364767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.364918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.364945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.365073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.365099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 
00:33:41.211 [2024-07-13 07:21:10.365227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.365254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.365385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.365411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.365548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.365574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.365700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.365726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f441c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.365863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.365910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.366055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.366094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.366229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.366256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.366409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.366434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.366553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.366579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.366713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.366738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 
00:33:41.211 [2024-07-13 07:21:10.366864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.366899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.367046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.367071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.367244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.367269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.367400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.367425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.367569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.367594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.367742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.367767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.211 [2024-07-13 07:21:10.367897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.211 [2024-07-13 07:21:10.367923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.211 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.368059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.368084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.368209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.368234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.368380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.368406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 
00:33:41.212 [2024-07-13 07:21:10.368529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.368555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.368694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.368719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.368879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.368904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.369090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.369129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.369274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.369301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.369417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.369454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.369587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.369613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.369745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.369770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.369912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.369939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.370058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.370084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 
00:33:41.212 [2024-07-13 07:21:10.370208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.370233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.370348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.370373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.370529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.370556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.370679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.370704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.370837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.370863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.371006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.371032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.371159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.371191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.371327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.371352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.371469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.371495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.371623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.371648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 
00:33:41.212 [2024-07-13 07:21:10.371781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.371806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.371942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.371969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.372104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.372130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.372267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.372292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.372420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.372445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.372569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.372595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.372708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.372734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.372879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.372918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.373047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.373072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.373276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.373302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 
00:33:41.212 [2024-07-13 07:21:10.373428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.373454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.373579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.373606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.373714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.373740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.373855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.373886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.374028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.374053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.374170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.374195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.374309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.374334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.212 [2024-07-13 07:21:10.374459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.212 [2024-07-13 07:21:10.374484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.212 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.374609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.374634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.374744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.374770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [2024-07-13 07:21:10.374952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.374977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.375096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.375121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.375262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.375287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.375410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.375434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.375561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.375587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.375722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.375758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.375894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.375921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.376052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.376079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.376202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.376235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.376366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.376392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [2024-07-13 07:21:10.376510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.376536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.376685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.376710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.376860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.376893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.377007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.377033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.377146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.377181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.377330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.377355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.377473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.377499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c2450 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.377666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.377705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.377837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.377873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.378002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.378028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [2024-07-13 07:21:10.378152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.378177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.378305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.378330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.378454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.378479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.378590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.378615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.378737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.378762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.378880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.378906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.379034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.379060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.379180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.379206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.379335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.379362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-07-13 07:21:10.379485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.379511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [2024-07-13 07:21:10.379629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-07-13 07:21:10.379656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f442c000b90 with addr=10.0.0.2, port=4420 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [condensed: the identical connect() failure (errno = 111, ECONNREFUSED) repeats for dozens of further attempts between 07:21:10.379823 and 07:21:10.400759, cycling through tqpairs 0x7f442c000b90, 0x18c2450 and 0x7f441c000b90, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."] 
00:33:41.217 [condensed: nine more identical connect() failures on tqpair=0x7f442c000b90, errno = 111, between 07:21:10.400879 and 07:21:10.402197] 
00:33:41.217 A controller has encountered a failure and is being reset. 00:33:41.217 [2024-07-13 07:21:10.402374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.217 [2024-07-13 07:21:10.402412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4424000b90 with addr=10.0.0.2, port=4420 00:33:41.217 qpair failed and we were unable to recover it. 
00:33:41.217 [condensed: seven more identical connect() failures on tqpair=0x7f4424000b90, errno = 111, between 07:21:10.402544 and 07:21:10.403501] 
00:33:41.217 [2024-07-13 07:21:10.403651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.217 [2024-07-13 07:21:10.403704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d0480 with addr=10.0.0.2, port=4420 [2024-07-13 07:21:10.403725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d0480 is same with the state(5) to be set [2024-07-13 07:21:10.403754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d0480 (9): Bad file descriptor [2024-07-13 07:21:10.403783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state [2024-07-13 07:21:10.403799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-07-13 07:21:10.403816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:41.217 Unable to reset the controller. 
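errno = 111 is ECONNREFUSED: while the disconnect test has the target torn down, nothing is listening on 10.0.0.2:4420, so every TCP SYN from the host's qpairs is actively refused until the controller reset completes. A minimal bash sketch of the same failure mode (the address and port are reused from the log; this is illustration, not harness code):

# Reproduction sketch -- with no listener bound to 10.0.0.2:4420, the
# connect() behind bash's /dev/tcp pseudo-device fails exactly like
# posix_sock_create above. The subshell keeps the fd out of this shell.
if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "connected to 10.0.0.2:4420"
else
    echo "connect() failed: connection refused (errno 111)"
fi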
00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.217 Malloc0 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.217 [2024-07-13 07:21:10.520914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.217 [2024-07-13 07:21:10.549149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.217 07:21:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1671873 00:33:42.154 Controller properly reset. 00:33:47.425 Initializing NVMe Controllers 00:33:47.425 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:47.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:47.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:47.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:47.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:47.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:47.425 Initialization complete. Launching workers. 
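The target bring-up traced above by host/target_disconnect.sh@19-26 (malloc bdev, TCP transport, subsystem, namespace, data listener, discovery listener) maps onto six standalone RPCs. A sketch of the same sequence issued by hand against a running nvmf_tgt; the scripts/rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions here:

# Same bring-up as the rpc_cmd calls in the trace, run directly.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420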
00:33:47.425 Starting thread on core 1 00:33:47.425 Starting thread on core 2 00:33:47.425 Starting thread on core 3 00:33:47.425 Starting thread on core 0 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:47.425 00:33:47.425 real 0m10.675s 00:33:47.425 user 0m33.151s 00:33:47.425 sys 0m7.455s 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:47.425 ************************************ 00:33:47.425 END TEST nvmf_target_disconnect_tc2 00:33:47.425 ************************************ 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:47.425 rmmod nvme_tcp 00:33:47.425 rmmod nvme_fabrics 00:33:47.425 rmmod nvme_keyring 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1672812 ']' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1672812 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1672812 ']' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1672812 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1672812 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1672812' 00:33:47.425 killing process with pid 1672812 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1672812 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1672812 00:33:47.425 
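The teardown just traced (a kill -0 liveness probe, a ps comm= check that the pid is an SPDK reactor rather than the sudo wrapper, then kill and wait) is autotest_common.sh's killprocess helper. A rough sketch reimplemented from the xtrace output, not copied from the harness source:

# killprocess flow as seen in the trace above (illustrative only).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    local comm
    comm=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_4 for nvmf_tgt
    [ "$comm" != sudo ] || return 1          # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}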
07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:47.425 07:21:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.328 07:21:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:49.328 00:33:49.328 real 0m15.469s 00:33:49.328 user 0m58.409s 00:33:49.328 sys 0m9.983s 00:33:49.328 07:21:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:49.328 07:21:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:49.328 ************************************ 00:33:49.328 END TEST nvmf_target_disconnect 00:33:49.328 ************************************ 00:33:49.587 07:21:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:49.588 07:21:18 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:49.588 07:21:18 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:49.588 07:21:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.588 07:21:18 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:49.588 00:33:49.588 real 27m6.077s 00:33:49.588 user 73m58.474s 00:33:49.588 sys 6m28.459s 00:33:49.588 07:21:18 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:49.588 07:21:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.588 ************************************ 00:33:49.588 END TEST nvmf_tcp 00:33:49.588 ************************************ 00:33:49.588 07:21:18 -- common/autotest_common.sh@1142 -- # return 0 00:33:49.588 07:21:18 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:33:49.588 07:21:18 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:49.588 07:21:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:49.588 07:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:49.588 07:21:18 -- common/autotest_common.sh@10 -- # set +x 00:33:49.588 ************************************ 00:33:49.588 START TEST spdkcli_nvmf_tcp 00:33:49.588 ************************************ 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:49.588 * Looking for test storage... 
00:33:49.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1673981 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1673981 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1673981 ']' 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:49.588 07:21:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.588 [2024-07-13 07:21:18.980581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:49.588 [2024-07-13 07:21:18.980657] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673981 ] 00:33:49.588 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.588 [2024-07-13 07:21:19.011655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:49.848 [2024-07-13 07:21:19.042379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:49.848 [2024-07-13 07:21:19.136892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.848 [2024-07-13 07:21:19.136904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.848 07:21:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:49.848 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:49.848 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:49.848 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:49.848 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:49.848 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:49.848 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:49.848 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:49.848 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:49.848 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:49.848 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:49.848 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:49.848 ' 00:33:52.393 [2024-07-13 07:21:21.788493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.771 [2024-07-13 07:21:23.028799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:56.307 [2024-07-13 07:21:25.311980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:58.232 [2024-07-13 07:21:27.282208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:59.612 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:59.612 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:59.612 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:59.612 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:59.612 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:59.612 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:59.612 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:59.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:59.612 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:59.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:59.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:59.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:59.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:59.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:59.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:59.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:59.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:59.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:59.613 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:59.613 07:21:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.182 07:21:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:00.182 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:00.182 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:00.182 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:00.182 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:00.182 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:00.182 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:00.182 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:00.182 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:00.182 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:00.182 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:00.182 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:00.182 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:00.182 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:00.182 ' 00:34:05.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:05.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:05.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:05.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:05.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:05.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:05.459 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:05.459 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:05.459 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:05.459 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:05.459 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:05.459 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:05.459 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:05.459 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1673981 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1673981 ']' 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1673981 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1673981 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1673981' 00:34:05.459 killing process with pid 1673981 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1673981 00:34:05.459 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1673981 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1673981 ']' 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1673981 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1673981 ']' 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1673981 00:34:05.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1673981) - No such process 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1673981 is not found' 00:34:05.718 Process with pid 1673981 is not found 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:05.718 00:34:05.718 real 0m16.121s 00:34:05.718 user 0m34.173s 00:34:05.718 sys 0m0.863s 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.718 07:21:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:05.718 ************************************ 00:34:05.719 END TEST spdkcli_nvmf_tcp 00:34:05.719 ************************************ 00:34:05.719 07:21:35 -- common/autotest_common.sh@1142 -- # return 0 00:34:05.719 
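That closes the spdkcli pass: create the full /nvmf tree, verify it, tear it down, and reap the target. The check_match step logged earlier reduces to dumping the live configuration tree and diffing it against a golden file; roughly (paths as in the trace; the redirect target is inferred from the rm -f that follows it):

    # Dump the current /nvmf configuration and compare it with the
    # expected output; the generated dump is removed afterwards.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/scripts/spdkcli.py" ll /nvmf > "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test"
    "$spdk/test/app/match/match" "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match"
    rm -f "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test"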
07:21:35 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:05.719 07:21:35 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:05.719 07:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.719 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:34:05.719 ************************************ 00:34:05.719 START TEST nvmf_identify_passthru 00:34:05.719 ************************************ 00:34:05.719 07:21:35 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:05.719 * Looking for test storage... 00:34:05.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.719 07:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.719 07:21:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.719 07:21:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.719 07:21:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:05.719 07:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.719 07:21:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.719 07:21:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.719 07:21:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:05.719 07:21:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.719 07:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.719 07:21:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:05.719 07:21:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:05.719 07:21:35 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:05.719 07:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.618 07:21:37 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:07.618 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:07.618 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:07.618 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:07.618 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
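The two "Found net devices" messages are produced by a sysfs walk over the detected PCI NICs. A condensed sketch of that loop, using the same expansions the xtrace shows (device addresses taken from this run; error handling trimmed):

    # Resolve each supported PCI device to its Linux net interface via
    # sysfs and collect the names for the rest of the test to use.
    pci_devs=(0000:0a:00.0 0000:0a:00.1)          # as detected above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} )) || continue      # no bound netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")    # keep interface name only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done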
00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.618 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:07.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:34:07.877 00:34:07.877 --- 10.0.0.2 ping statistics --- 00:34:07.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.877 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:34:07.877 00:34:07.877 --- 10.0.0.1 ping statistics --- 00:34:07.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.877 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:07.877 07:21:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:07.877 07:21:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:07.877 07:21:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:34:07.877 07:21:37 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:34:07.877 07:21:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:07.877 07:21:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:07.877 07:21:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:07.877 07:21:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:07.877 07:21:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:07.877 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.066 
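Both pings returning 0% packet loss confirms the split nvmf_tcp_init performed just above: one port of the NIC (cvl_0_0, 10.0.0.2) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Restated as standalone commands, exactly as they appear in the trace:

    # Target side lives in namespace cvl_0_0_ns_spdk; initiator side
    # stays in the root namespace. Port 4420 is the NVMe/TCP listener.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator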
07:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:12.067 07:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:12.067 07:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:12.067 07:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:12.067 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.257 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:16.257 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.257 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.257 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1678590 00:34:16.257 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:16.257 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.257 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1678590 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1678590 ']' 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:16.257 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.517 [2024-07-13 07:21:45.743003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:34:16.517 [2024-07-13 07:21:45.743096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.517 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.517 [2024-07-13 07:21:45.781117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:16.517 [2024-07-13 07:21:45.812337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.517 [2024-07-13 07:21:45.903466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:16.517 [2024-07-13 07:21:45.903519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.517 [2024-07-13 07:21:45.903544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.517 [2024-07-13 07:21:45.903557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.517 [2024-07-13 07:21:45.903570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.517 [2024-07-13 07:21:45.903655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.517 [2024-07-13 07:21:45.903722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.517 [2024-07-13 07:21:45.903816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.517 [2024-07-13 07:21:45.903818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.517 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:16.517 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:16.517 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:16.517 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.517 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.517 INFO: Log level set to 20 00:34:16.517 INFO: Requests: 00:34:16.517 { 00:34:16.517 "jsonrpc": "2.0", 00:34:16.517 "method": "nvmf_set_config", 00:34:16.517 "id": 1, 00:34:16.517 "params": { 00:34:16.517 "admin_cmd_passthru": { 00:34:16.517 "identify_ctrlr": true 00:34:16.517 } 00:34:16.517 } 00:34:16.517 } 00:34:16.517 00:34:16.517 INFO: response: 00:34:16.517 { 00:34:16.517 "jsonrpc": "2.0", 00:34:16.517 "id": 1, 00:34:16.517 "result": true 00:34:16.517 } 00:34:16.517 00:34:16.517 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.517 07:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:16.517 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.517 07:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.517 INFO: Setting log level to 20 00:34:16.517 INFO: Setting log level to 20 00:34:16.517 INFO: Log level set to 20 00:34:16.517 INFO: Log level set to 20 00:34:16.517 INFO: Requests: 00:34:16.517 { 00:34:16.517 "jsonrpc": "2.0", 00:34:16.517 "method": "framework_start_init", 00:34:16.517 "id": 1 00:34:16.517 } 00:34:16.517 00:34:16.517 INFO: Requests: 00:34:16.517 { 00:34:16.517 "jsonrpc": "2.0", 00:34:16.517 "method": "framework_start_init", 00:34:16.517 "id": 1 00:34:16.517 } 00:34:16.517 00:34:16.776 [2024-07-13 07:21:46.068229] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:16.776 INFO: response: 00:34:16.776 { 00:34:16.776 "jsonrpc": "2.0", 00:34:16.776 "id": 1, 00:34:16.776 "result": true 00:34:16.776 } 00:34:16.776 00:34:16.776 INFO: response: 00:34:16.776 { 00:34:16.776 "jsonrpc": "2.0", 00:34:16.776 "id": 1, 00:34:16.776 "result": true 00:34:16.776 } 00:34:16.776 00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.776 07:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
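The INFO Requests/response blocks above are the raw JSON-RPC traffic behind rpc_cmd. Assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py (its usual role in these tests), the same target configuration could be driven by hand:

    # Enable identify passthrough before the framework initializes the
    # subsystems, then start the framework and create the TCP transport
    # (flags copied verbatim from the trace).
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192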
00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.776 INFO: Setting log level to 40 00:34:16.776 INFO: Setting log level to 40 00:34:16.776 INFO: Setting log level to 40 00:34:16.776 [2024-07-13 07:21:46.078334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.776 07:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.776 07:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.776 07:21:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.099 Nvme0n1 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.099 07:21:48 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.099 07:21:48 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.099 07:21:48 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.099 [2024-07-13 07:21:48.968918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.099 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.099 07:21:48 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:20.100 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.100 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.100 [ 00:34:20.100 { 00:34:20.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:20.100 "subtype": "Discovery", 00:34:20.100 "listen_addresses": [], 00:34:20.100 "allow_any_host": true, 00:34:20.100 "hosts": [] 00:34:20.100 }, 00:34:20.100 { 00:34:20.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:20.100 "subtype": "NVMe", 00:34:20.100 "listen_addresses": [ 00:34:20.100 { 00:34:20.100 "trtype": "TCP", 00:34:20.100 "adrfam": "IPv4", 00:34:20.100 "traddr": "10.0.0.2", 00:34:20.100 
"trsvcid": "4420" 00:34:20.100 } 00:34:20.100 ], 00:34:20.100 "allow_any_host": true, 00:34:20.100 "hosts": [], 00:34:20.100 "serial_number": "SPDK00000000000001", 00:34:20.100 "model_number": "SPDK bdev Controller", 00:34:20.100 "max_namespaces": 1, 00:34:20.100 "min_cntlid": 1, 00:34:20.100 "max_cntlid": 65519, 00:34:20.100 "namespaces": [ 00:34:20.100 { 00:34:20.100 "nsid": 1, 00:34:20.100 "bdev_name": "Nvme0n1", 00:34:20.100 "name": "Nvme0n1", 00:34:20.100 "nguid": "276F3F82EC08424C9609BC2601D3008C", 00:34:20.100 "uuid": "276f3f82-ec08-424c-9609-bc2601d3008c" 00:34:20.100 } 00:34:20.100 ] 00:34:20.100 } 00:34:20.100 ] 00:34:20.100 07:21:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.100 07:21:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:20.100 07:21:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:20.100 07:21:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:20.100 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:20.100 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:20.100 07:21:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:20.100 rmmod nvme_tcp 00:34:20.100 rmmod nvme_fabrics 00:34:20.100 rmmod nvme_keyring 00:34:20.100 07:21:49 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1678590 ']' 00:34:20.100 07:21:49 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1678590 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1678590 ']' 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1678590 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1678590 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1678590' 00:34:20.100 killing process with pid 1678590 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1678590 00:34:20.100 07:21:49 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1678590 00:34:21.477 07:21:50 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:21.477 07:21:50 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:21.477 07:21:50 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:21.477 07:21:50 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:21.477 07:21:50 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:21.477 07:21:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.477 07:21:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:21.477 07:21:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.007 07:21:52 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:24.007 00:34:24.007 real 0m17.866s 00:34:24.007 user 0m26.342s 00:34:24.007 sys 0m2.201s 00:34:24.007 07:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:24.007 07:21:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.007 ************************************ 00:34:24.007 END TEST nvmf_identify_passthru 00:34:24.007 ************************************ 00:34:24.007 07:21:52 -- common/autotest_common.sh@1142 -- # return 0 00:34:24.007 07:21:52 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:24.007 07:21:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:24.007 07:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:24.007 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:34:24.007 ************************************ 00:34:24.007 START TEST nvmf_dif 00:34:24.007 ************************************ 00:34:24.007 07:21:52 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:24.007 * Looking for test 
storage... 00:34:24.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:24.007 07:21:52 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.007 07:21:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.007 07:21:53 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.007 07:21:53 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.007 07:21:53 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.007 07:21:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.007 07:21:53 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.007 07:21:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.007 07:21:53 nvmf_dif -- 
paths/export.sh@5 -- # export PATH 00:34:24.007 07:21:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:24.007 07:21:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:24.007 07:21:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:24.007 07:21:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:24.007 07:21:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:24.007 07:21:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.007 07:21:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:24.007 07:21:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:24.007 07:21:53 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:24.007 07:21:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:25.912 07:21:54 nvmf_dif 
-- nvmf/common.sh@298 -- # mlx=() 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:25.912 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:25.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:25.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:25.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:25.912 07:21:54 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:25.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:34:25.912 00:34:25.912 --- 10.0.0.2 ping statistics --- 00:34:25.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.912 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:25.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:34:25.912 00:34:25.912 --- 10.0.0.1 ping statistics --- 00:34:25.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.912 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:25.912 07:21:55 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:26.848 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:26.848 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:26.848 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:26.848 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:26.848 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:26.848 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:26.848 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:26.848 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:26.848 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:26.848 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:26.848 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:26.848 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:26.848 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:26.848 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:26.848 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:26.848 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:26.848 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:27.109 07:21:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:27.109 07:21:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1681735 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:27.109 07:21:56 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1681735 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1681735 ']' 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:27.109 07:21:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.109 [2024-07-13 07:21:56.422248] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:34:27.109 [2024-07-13 07:21:56.422332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.109 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.109 [2024-07-13 07:21:56.467134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:27.109 [2024-07-13 07:21:56.499553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.378 [2024-07-13 07:21:56.594304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.378 [2024-07-13 07:21:56.594375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.378 [2024-07-13 07:21:56.594393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.378 [2024-07-13 07:21:56.594407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.378 [2024-07-13 07:21:56.594420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
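
nvmfappstart, whose trace this is, boils down to launching nvmf_tgt inside the target namespace and then parking in waitforlisten until the application answers on its RPC socket. A rough equivalent with abbreviated paths, assuming the default /var/tmp/spdk.sock endpoint (the real waitforlisten helper polls the same way but with a bounded retry count):

    # Sketch: start the target in the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
        sleep 0.5
    done
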
00:34:27.378 [2024-07-13 07:21:56.594452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.378 07:21:56 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:27.378 07:21:56 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:34:27.378 07:21:56 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:27.378 07:21:56 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:27.378 07:21:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.378 07:21:56 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.378 07:21:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:27.378 07:21:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:27.378 07:21:56 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.378 07:21:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.379 [2024-07-13 07:21:56.746363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.379 07:21:56 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.379 07:21:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:27.379 07:21:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:27.379 07:21:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:27.379 07:21:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.379 ************************************ 00:34:27.379 START TEST fio_dif_1_default 00:34:27.379 ************************************ 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.379 bdev_null0 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.379 [2024-07-13 07:21:56.806663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:27.379 { 00:34:27.379 "params": { 00:34:27.379 "name": "Nvme$subsystem", 00:34:27.379 "trtype": "$TEST_TRANSPORT", 00:34:27.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:27.379 "adrfam": "ipv4", 00:34:27.379 "trsvcid": "$NVMF_PORT", 00:34:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:27.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:27.379 "hdgst": ${hdgst:-false}, 00:34:27.379 "ddgst": ${ddgst:-false} 00:34:27.379 }, 00:34:27.379 "method": "bdev_nvme_attach_controller" 00:34:27.379 } 00:34:27.379 EOF 00:34:27.379 )") 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:27.379 07:21:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:27.379 "params": { 00:34:27.379 "name": "Nvme0", 00:34:27.379 "trtype": "tcp", 00:34:27.379 "traddr": "10.0.0.2", 00:34:27.379 "adrfam": "ipv4", 00:34:27.379 "trsvcid": "4420", 00:34:27.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:27.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:27.379 "hdgst": false, 00:34:27.379 "ddgst": false 00:34:27.379 }, 00:34:27.379 "method": "bdev_nvme_attach_controller" 00:34:27.379 }' 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:27.637 07:21:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.637 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:27.637 fio-3.35 00:34:27.637 Starting 1 thread 00:34:27.895 EAL: No free 2048 kB hugepages reported on node 1 00:34:40.112 00:34:40.112 filename0: (groupid=0, jobs=1): err= 0: pid=1681964: Sat Jul 13 07:22:07 2024 00:34:40.112 read: IOPS=142, BW=569KiB/s (583kB/s)(5696KiB/10002msec) 00:34:40.112 slat (nsec): min=4878, max=57439, avg=9570.66, stdev=3291.88 00:34:40.112 clat (usec): min=684, max=45643, avg=28064.39, stdev=18841.82 00:34:40.112 lat (usec): min=692, max=45659, avg=28073.96, stdev=18841.86 00:34:40.112 clat percentiles (usec): 00:34:40.112 | 1.00th=[ 725], 5.00th=[ 766], 10.00th=[ 783], 20.00th=[ 816], 00:34:40.112 | 30.00th=[ 848], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:40.112 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:40.112 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:40.112 | 99.99th=[45876] 00:34:40.112 bw ( KiB/s): min= 384, max= 768, per=100.00%, avg=574.32, stdev=184.90, samples=19 00:34:40.112 iops : min= 96, max= 192, 
avg=143.58, stdev=46.22, samples=19 00:34:40.112 lat (usec) : 750=3.51%, 1000=28.79% 00:34:40.112 lat (msec) : 50=67.70% 00:34:40.112 cpu : usr=89.46%, sys=10.25%, ctx=22, majf=0, minf=238 00:34:40.112 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.112 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.112 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:40.112 00:34:40.112 Run status group 0 (all jobs): 00:34:40.112 READ: bw=569KiB/s (583kB/s), 569KiB/s-569KiB/s (583kB/s-583kB/s), io=5696KiB (5833kB), run=10002-10002msec 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 00:34:40.112 real 0m11.159s 00:34:40.112 user 0m10.191s 00:34:40.112 sys 0m1.291s 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 ************************************ 00:34:40.112 END TEST fio_dif_1_default 00:34:40.112 ************************************ 00:34:40.112 07:22:07 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:40.112 07:22:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:40.112 07:22:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:40.112 07:22:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:40.112 07:22:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 ************************************ 00:34:40.112 START TEST fio_dif_1_multi_subsystems 00:34:40.112 ************************************ 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:40.112 07:22:07 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 bdev_null0 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 [2024-07-13 07:22:08.019527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.112 bdev_null1 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:40.112 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:40.113 { 00:34:40.113 "params": { 00:34:40.113 "name": "Nvme$subsystem", 00:34:40.113 "trtype": "$TEST_TRANSPORT", 00:34:40.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.113 "adrfam": "ipv4", 00:34:40.113 "trsvcid": "$NVMF_PORT", 00:34:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.113 "hdgst": ${hdgst:-false}, 00:34:40.113 "ddgst": ${ddgst:-false} 00:34:40.113 }, 00:34:40.113 "method": "bdev_nvme_attach_controller" 00:34:40.113 } 00:34:40.113 EOF 00:34:40.113 )") 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:40.113 { 00:34:40.113 "params": { 00:34:40.113 "name": "Nvme$subsystem", 00:34:40.113 "trtype": "$TEST_TRANSPORT", 00:34:40.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.113 "adrfam": "ipv4", 00:34:40.113 "trsvcid": "$NVMF_PORT", 00:34:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.113 "hdgst": ${hdgst:-false}, 00:34:40.113 "ddgst": ${ddgst:-false} 00:34:40.113 }, 00:34:40.113 "method": "bdev_nvme_attach_controller" 00:34:40.113 } 00:34:40.113 EOF 00:34:40.113 )") 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
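
gen_nvmf_target_json, traced here, emits one bdev_nvme_attach_controller block per subsystem (the params it prints appear just below) and hands the result to fio on an anonymous file descriptor via --spdk_json_conf /dev/fd/62, so the NVMe-oF bdevs exist only inside the fio process. A hand-rolled single-subsystem equivalent, assuming the usual "subsystems"/"bdev" wrapper around those params; the fio job options are illustrative only, though thread=1 is genuinely required by the SPDK bdev plugin:

    conf='{"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_nvme_attach_controller",
       "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false}}]}]}'
    # Attaching controller Nvme0 exposes bdev Nvme0n1, which fio addresses by name;
    # process substitution reproduces the /dev/fd trick used by the test.
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=<(printf '%s' "$conf") \
        --name=filename0 --filename=Nvme0n1 --thread=1 \
        --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based
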
00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:40.113 "params": { 00:34:40.113 "name": "Nvme0", 00:34:40.113 "trtype": "tcp", 00:34:40.113 "traddr": "10.0.0.2", 00:34:40.113 "adrfam": "ipv4", 00:34:40.113 "trsvcid": "4420", 00:34:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:40.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:40.113 "hdgst": false, 00:34:40.113 "ddgst": false 00:34:40.113 }, 00:34:40.113 "method": "bdev_nvme_attach_controller" 00:34:40.113 },{ 00:34:40.113 "params": { 00:34:40.113 "name": "Nvme1", 00:34:40.113 "trtype": "tcp", 00:34:40.113 "traddr": "10.0.0.2", 00:34:40.113 "adrfam": "ipv4", 00:34:40.113 "trsvcid": "4420", 00:34:40.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:40.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:40.113 "hdgst": false, 00:34:40.113 "ddgst": false 00:34:40.113 }, 00:34:40.113 "method": "bdev_nvme_attach_controller" 00:34:40.113 }' 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:40.113 07:22:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.113 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:40.113 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:40.113 fio-3.35 00:34:40.113 Starting 2 threads 00:34:40.113 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.109 00:34:50.109 filename0: (groupid=0, jobs=1): err= 0: pid=1683361: Sat Jul 13 07:22:19 2024 00:34:50.109 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:34:50.109 slat (nsec): min=7013, max=60100, avg=10760.25, stdev=5805.85 00:34:50.109 clat (usec): min=710, max=43725, avg=21019.55, stdev=20186.68 00:34:50.109 lat (usec): min=718, max=43761, avg=21030.31, stdev=20185.13 00:34:50.110 clat percentiles (usec): 00:34:50.110 | 1.00th=[ 734], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:34:50.110 | 30.00th=[ 799], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:34:50.110 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:50.110 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:34:50.110 | 99.99th=[43779] 
00:34:50.110 bw ( KiB/s): min= 672, max= 768, per=50.01%, avg=759.58, stdev=23.47, samples=19 00:34:50.110 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19 00:34:50.110 lat (usec) : 750=3.32%, 1000=46.37% 00:34:50.110 lat (msec) : 2=0.21%, 50=50.11% 00:34:50.110 cpu : usr=97.67%, sys=2.05%, ctx=14, majf=0, minf=196 00:34:50.110 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.110 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.110 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:50.110 filename1: (groupid=0, jobs=1): err= 0: pid=1683362: Sat Jul 13 07:22:19 2024 00:34:50.110 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10005msec) 00:34:50.110 slat (nsec): min=4557, max=47995, avg=11807.95, stdev=6146.42 00:34:50.110 clat (usec): min=704, max=45698, avg=21069.71, stdev=20126.99 00:34:50.110 lat (usec): min=712, max=45744, avg=21081.52, stdev=20125.27 00:34:50.110 clat percentiles (usec): 00:34:50.110 | 1.00th=[ 758], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 840], 00:34:50.110 | 30.00th=[ 873], 40.00th=[ 914], 50.00th=[40633], 60.00th=[41157], 00:34:50.110 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:50.110 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:50.110 | 99.99th=[45876] 00:34:50.110 bw ( KiB/s): min= 672, max= 768, per=49.81%, avg=756.80, stdev=26.01, samples=20 00:34:50.110 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:34:50.110 lat (usec) : 750=0.63%, 1000=48.52% 00:34:50.110 lat (msec) : 2=0.63%, 50=50.21% 00:34:50.110 cpu : usr=97.05%, sys=2.67%, ctx=10, majf=0, minf=42 00:34:50.110 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.110 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.110 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:50.110 00:34:50.110 Run status group 0 (all jobs): 00:34:50.110 READ: bw=1518KiB/s (1554kB/s), 758KiB/s-760KiB/s (776kB/s-778kB/s), io=14.8MiB (15.5MB), run=10001-10005msec 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:50.110 07:22:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 00:34:50.110 real 0m11.400s 00:34:50.110 user 0m20.834s 00:34:50.110 sys 0m0.799s 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 ************************************ 00:34:50.110 END TEST fio_dif_1_multi_subsystems 00:34:50.110 ************************************ 00:34:50.110 07:22:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:50.110 07:22:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:50.110 07:22:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:50.110 07:22:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 ************************************ 00:34:50.110 START TEST fio_dif_rand_params 00:34:50.110 ************************************ 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 bdev_null0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.110 [2024-07-13 07:22:19.472233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:50.110 { 00:34:50.110 "params": { 00:34:50.110 "name": "Nvme$subsystem", 00:34:50.110 "trtype": "$TEST_TRANSPORT", 00:34:50.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.110 "adrfam": "ipv4", 00:34:50.110 "trsvcid": "$NVMF_PORT", 00:34:50.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.110 "hdgst": ${hdgst:-false}, 00:34:50.110 "ddgst": ${ddgst:-false} 00:34:50.110 }, 00:34:50.110 "method": "bdev_nvme_attach_controller" 00:34:50.110 } 00:34:50.110 EOF 00:34:50.110 )") 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.110 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:50.111 "params": { 00:34:50.111 "name": "Nvme0", 00:34:50.111 "trtype": "tcp", 00:34:50.111 "traddr": "10.0.0.2", 00:34:50.111 "adrfam": "ipv4", 00:34:50.111 "trsvcid": "4420", 00:34:50.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.111 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:50.111 "hdgst": false, 00:34:50.111 "ddgst": false 00:34:50.111 }, 00:34:50.111 "method": "bdev_nvme_attach_controller" 00:34:50.111 }' 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:50.111 07:22:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.368 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:50.368 ... 
00:34:50.368 fio-3.35 00:34:50.368 Starting 3 threads 00:34:50.368 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.919 00:34:56.919 filename0: (groupid=0, jobs=1): err= 0: pid=1684765: Sat Jul 13 07:22:25 2024 00:34:56.919 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(141MiB/5006msec) 00:34:56.919 slat (nsec): min=5002, max=81840, avg=16562.42, stdev=5743.49 00:34:56.919 clat (usec): min=4996, max=88598, avg=13336.94, stdev=10764.01 00:34:56.919 lat (usec): min=5010, max=88616, avg=13353.50, stdev=10764.42 00:34:56.919 clat percentiles (usec): 00:34:56.919 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6849], 20.00th=[ 8455], 00:34:56.919 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[11731], 00:34:56.919 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15139], 95.00th=[50070], 00:34:56.919 | 99.00th=[53216], 99.50th=[54789], 99.90th=[56886], 99.95th=[88605], 00:34:56.919 | 99.99th=[88605] 00:34:56.919 bw ( KiB/s): min=24064, max=32512, per=35.35%, avg=28697.60, stdev=2755.75, samples=10 00:34:56.919 iops : min= 188, max= 254, avg=224.20, stdev=21.53, samples=10 00:34:56.919 lat (msec) : 10=41.46%, 20=51.69%, 50=1.87%, 100=4.98% 00:34:56.919 cpu : usr=92.43%, sys=7.13%, ctx=14, majf=0, minf=127 00:34:56.919 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.919 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.919 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:56.919 filename0: (groupid=0, jobs=1): err= 0: pid=1684766: Sat Jul 13 07:22:25 2024 00:34:56.919 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(141MiB/5004msec) 00:34:56.919 slat (nsec): min=4569, max=84788, avg=14402.73, stdev=5383.50 00:34:56.919 clat (usec): min=5187, max=57366, avg=13310.56, stdev=10687.43 00:34:56.919 lat (usec): min=5199, max=57379, avg=13324.96, stdev=10687.54 00:34:56.920 clat percentiles (usec): 00:34:56.920 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 8094], 00:34:56.920 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[11731], 00:34:56.920 | 70.00th=[12649], 80.00th=[13829], 90.00th=[15926], 95.00th=[50070], 00:34:56.920 | 99.00th=[53740], 99.50th=[54789], 99.90th=[56361], 99.95th=[57410], 00:34:56.920 | 99.99th=[57410] 00:34:56.920 bw ( KiB/s): min=21248, max=37888, per=35.44%, avg=28774.40, stdev=6765.81, samples=10 00:34:56.920 iops : min= 166, max= 296, avg=224.80, stdev=52.86, samples=10 00:34:56.920 lat (msec) : 10=41.65%, 20=51.69%, 50=1.51%, 100=5.15% 00:34:56.920 cpu : usr=92.46%, sys=7.08%, ctx=14, majf=0, minf=108 00:34:56.920 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.920 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:56.920 filename0: (groupid=0, jobs=1): err= 0: pid=1684767: Sat Jul 13 07:22:25 2024 00:34:56.920 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(119MiB/5045msec) 00:34:56.920 slat (nsec): min=3921, max=49451, avg=14770.94, stdev=4998.25 00:34:56.920 clat (usec): min=5442, max=56982, avg=15867.33, stdev=12919.07 00:34:56.920 lat (usec): min=5455, max=56996, avg=15882.10, stdev=12918.93 00:34:56.920 clat percentiles (usec): 
00:34:56.920 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 8356], 20.00th=[ 9110], 00:34:56.920 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[11994], 60.00th=[12780], 00:34:56.920 | 70.00th=[13566], 80.00th=[14615], 90.00th=[48497], 95.00th=[51643], 00:34:56.920 | 99.00th=[54264], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:34:56.920 | 99.99th=[56886] 00:34:56.920 bw ( KiB/s): min=18688, max=28928, per=29.87%, avg=24248.30, stdev=3588.31, samples=10 00:34:56.920 iops : min= 146, max= 226, avg=189.40, stdev=28.02, samples=10 00:34:56.920 lat (msec) : 10=30.84%, 20=57.89%, 50=3.37%, 100=7.89% 00:34:56.920 cpu : usr=92.57%, sys=7.02%, ctx=12, majf=0, minf=123 00:34:56.920 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.920 issued rwts: total=950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:56.920 00:34:56.920 Run status group 0 (all jobs): 00:34:56.920 READ: bw=79.3MiB/s (83.1MB/s), 23.5MiB/s-28.1MiB/s (24.7MB/s-29.5MB/s), io=400MiB (419MB), run=5004-5045msec 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 bdev_null0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 [2024-07-13 07:22:25.693251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 bdev_null1 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 bdev_null2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
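Each create_subsystem call traced above performs the same four RPCs: create a null bdev with 16-byte metadata and the requested DIF type, create an NVMe-oF subsystem, attach the bdev as a namespace, and add the TCP listener. A condensed sketch of that sequence for the three subsystems this test uses; every parameter is taken from the log, but invoking the RPCs through scripts/rpc.py is an assumption (the test driver issues them via its rpc_cmd wrapper):

# One null bdev + subsystem + namespace + listener per sub_id,
# mirroring target/dif.sh create_subsystem as seen in the trace.
for sub_id in 0 1 2; do
    scripts/rpc.py bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a 10.0.0.2 -s 4420
done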
00:34:56.920 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:56.920 { 00:34:56.920 "params": { 00:34:56.920 "name": "Nvme$subsystem", 00:34:56.920 "trtype": "$TEST_TRANSPORT", 00:34:56.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.920 "adrfam": "ipv4", 00:34:56.920 "trsvcid": "$NVMF_PORT", 00:34:56.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.920 "hdgst": ${hdgst:-false}, 00:34:56.920 "ddgst": ${ddgst:-false} 00:34:56.920 }, 00:34:56.920 "method": "bdev_nvme_attach_controller" 00:34:56.920 } 00:34:56.920 EOF 00:34:56.920 )") 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:56.921 { 00:34:56.921 "params": { 00:34:56.921 "name": "Nvme$subsystem", 00:34:56.921 "trtype": "$TEST_TRANSPORT", 00:34:56.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.921 "adrfam": "ipv4", 00:34:56.921 "trsvcid": "$NVMF_PORT", 00:34:56.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.921 "hdgst": ${hdgst:-false}, 00:34:56.921 "ddgst": ${ddgst:-false} 00:34:56.921 }, 00:34:56.921 "method": "bdev_nvme_attach_controller" 00:34:56.921 } 00:34:56.921 EOF 00:34:56.921 )") 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:56.921 { 00:34:56.921 "params": { 00:34:56.921 "name": "Nvme$subsystem", 00:34:56.921 "trtype": "$TEST_TRANSPORT", 00:34:56.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.921 "adrfam": "ipv4", 00:34:56.921 "trsvcid": "$NVMF_PORT", 00:34:56.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.921 "hdgst": ${hdgst:-false}, 00:34:56.921 "ddgst": ${ddgst:-false} 00:34:56.921 }, 00:34:56.921 "method": "bdev_nvme_attach_controller" 00:34:56.921 } 00:34:56.921 EOF 00:34:56.921 )") 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:56.921 "params": { 00:34:56.921 "name": "Nvme0", 00:34:56.921 "trtype": "tcp", 00:34:56.921 "traddr": "10.0.0.2", 00:34:56.921 "adrfam": "ipv4", 00:34:56.921 "trsvcid": "4420", 00:34:56.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:56.921 "hdgst": false, 00:34:56.921 "ddgst": false 00:34:56.921 }, 00:34:56.921 "method": "bdev_nvme_attach_controller" 00:34:56.921 },{ 00:34:56.921 "params": { 00:34:56.921 "name": "Nvme1", 00:34:56.921 "trtype": "tcp", 00:34:56.921 "traddr": "10.0.0.2", 00:34:56.921 "adrfam": "ipv4", 00:34:56.921 "trsvcid": "4420", 00:34:56.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:56.921 "hdgst": false, 00:34:56.921 "ddgst": false 00:34:56.921 }, 00:34:56.921 "method": "bdev_nvme_attach_controller" 00:34:56.921 },{ 00:34:56.921 "params": { 00:34:56.921 "name": "Nvme2", 00:34:56.921 "trtype": "tcp", 00:34:56.921 "traddr": "10.0.0.2", 00:34:56.921 "adrfam": "ipv4", 00:34:56.921 "trsvcid": "4420", 00:34:56.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:56.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:56.921 "hdgst": false, 00:34:56.921 "ddgst": false 00:34:56.921 }, 00:34:56.921 "method": "bdev_nvme_attach_controller" 00:34:56.921 }' 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:56.921 07:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.921 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:56.921 ... 00:34:56.921 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:56.921 ... 00:34:56.921 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:56.921 ... 00:34:56.921 fio-3.35 00:34:56.921 Starting 24 threads 00:34:56.921 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.119 00:35:09.119 filename0: (groupid=0, jobs=1): err= 0: pid=1685619: Sat Jul 13 07:22:36 2024 00:35:09.119 read: IOPS=71, BW=284KiB/s (291kB/s)(2864KiB/10084msec) 00:35:09.120 slat (nsec): min=7887, max=89993, avg=20251.70, stdev=15398.63 00:35:09.120 clat (msec): min=93, max=412, avg=224.25, stdev=44.05 00:35:09.120 lat (msec): min=93, max=412, avg=224.27, stdev=44.06 00:35:09.120 clat percentiles (msec): 00:35:09.120 | 1.00th=[ 94], 5.00th=[ 144], 10.00th=[ 176], 20.00th=[ 201], 00:35:09.120 | 30.00th=[ 211], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 234], 00:35:09.120 | 70.00th=[ 234], 80.00th=[ 264], 90.00th=[ 288], 95.00th=[ 300], 00:35:09.120 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 414], 99.95th=[ 414], 00:35:09.120 | 99.99th=[ 414] 00:35:09.120 bw ( KiB/s): min= 128, max= 384, per=4.59%, avg=280.00, stdev=57.92, samples=20 00:35:09.120 iops : min= 32, max= 96, avg=70.00, stdev=14.48, samples=20 00:35:09.120 lat (msec) : 100=2.23%, 250=75.42%, 500=22.35% 00:35:09.120 cpu : usr=97.76%, sys=1.67%, ctx=84, majf=0, minf=29 00:35:09.120 IO depths : 1=1.7%, 2=5.6%, 4=17.9%, 8=64.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:09.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.120 filename0: (groupid=0, jobs=1): err= 0: pid=1685620: Sat Jul 13 07:22:36 2024 00:35:09.120 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10069msec) 00:35:09.120 slat (usec): min=8, max=144, avg=33.02, stdev=13.71 00:35:09.120 clat (msec): min=139, max=374, avg=287.38, stdev=47.10 00:35:09.120 lat (msec): min=139, max=374, avg=287.41, stdev=47.10 00:35:09.120 clat percentiles (msec): 00:35:09.120 | 1.00th=[ 169], 5.00th=[ 197], 10.00th=[ 228], 20.00th=[ 249], 00:35:09.120 | 30.00th=[ 266], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 309], 00:35:09.120 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 368], 00:35:09.120 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:35:09.120 | 99.99th=[ 376] 00:35:09.120 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=217.55, stdev=53.51, samples=20 00:35:09.120 iops : min= 32, max= 64, avg=54.35, stdev=13.35, samples=20 00:35:09.120 lat (msec) : 250=20.36%, 500=79.64% 00:35:09.120 cpu : usr=96.98%, sys=2.08%, ctx=54, majf=0, minf=26 00:35:09.120 IO depths : 
1=2.3%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:09.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.120 filename0: (groupid=0, jobs=1): err= 0: pid=1685621: Sat Jul 13 07:22:36 2024 00:35:09.120 read: IOPS=54, BW=217KiB/s (222kB/s)(2176KiB/10027msec) 00:35:09.120 slat (nsec): min=8611, max=84588, avg=33238.14, stdev=17374.91 00:35:09.120 clat (msec): min=174, max=430, avg=294.58, stdev=50.91 00:35:09.120 lat (msec): min=174, max=430, avg=294.62, stdev=50.90 00:35:09.120 clat percentiles (msec): 00:35:09.120 | 1.00th=[ 176], 5.00th=[ 205], 10.00th=[ 226], 20.00th=[ 247], 00:35:09.120 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 313], 00:35:09.120 | 70.00th=[ 321], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 380], 00:35:09.120 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 430], 00:35:09.120 | 99.99th=[ 430] 00:35:09.120 bw ( KiB/s): min= 128, max= 384, per=3.46%, avg=211.20, stdev=73.89, samples=20 00:35:09.120 iops : min= 32, max= 96, avg=52.80, stdev=18.47, samples=20 00:35:09.120 lat (msec) : 250=21.32%, 500=78.68% 00:35:09.120 cpu : usr=97.98%, sys=1.60%, ctx=11, majf=0, minf=31 00:35:09.120 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:09.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.120 filename0: (groupid=0, jobs=1): err= 0: pid=1685622: Sat Jul 13 07:22:36 2024 00:35:09.120 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10076msec) 00:35:09.120 slat (usec): min=8, max=150, avg=32.55, stdev=24.71 00:35:09.120 clat (msec): min=138, max=374, avg=279.60, stdev=59.09 00:35:09.120 lat (msec): min=138, max=374, avg=279.64, stdev=59.08 00:35:09.120 clat percentiles (msec): 00:35:09.120 | 1.00th=[ 140], 5.00th=[ 146], 10.00th=[ 192], 20.00th=[ 230], 00:35:09.120 | 30.00th=[ 255], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 309], 00:35:09.120 | 70.00th=[ 317], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 359], 00:35:09.120 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:35:09.120 | 99.99th=[ 376] 00:35:09.120 bw ( KiB/s): min= 128, max= 368, per=3.66%, avg=224.00, stdev=69.26, samples=20 00:35:09.120 iops : min= 32, max= 92, avg=56.00, stdev=17.31, samples=20 00:35:09.120 lat (msec) : 250=28.12%, 500=71.88% 00:35:09.120 cpu : usr=97.66%, sys=1.59%, ctx=32, majf=0, minf=21 00:35:09.120 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:09.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.120 filename0: (groupid=0, jobs=1): err= 0: pid=1685623: Sat Jul 13 07:22:36 2024 00:35:09.120 read: IOPS=79, BW=316KiB/s (324kB/s)(3192KiB/10099msec) 00:35:09.120 slat (usec): min=5, max=160, avg=19.99, stdev=19.33 00:35:09.120 clat (msec): min=23, max=272, avg=202.11, stdev=45.75 00:35:09.120 lat 
(msec): min=23, max=272, avg=202.13, stdev=45.74 00:35:09.120 clat percentiles (msec): 00:35:09.120 | 1.00th=[ 24], 5.00th=[ 100], 10.00th=[ 142], 20.00th=[ 184], 00:35:09.120 | 30.00th=[ 201], 40.00th=[ 209], 50.00th=[ 215], 60.00th=[ 220], 00:35:09.120 | 70.00th=[ 224], 80.00th=[ 234], 90.00th=[ 239], 95.00th=[ 241], 00:35:09.120 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:35:09.120 | 99.99th=[ 271] 00:35:09.120 bw ( KiB/s): min= 256, max= 512, per=5.12%, avg=312.80, stdev=72.95, samples=20 00:35:09.120 iops : min= 64, max= 128, avg=78.20, stdev=18.24, samples=20 00:35:09.120 lat (msec) : 50=2.01%, 100=4.01%, 250=89.72%, 500=4.26% 00:35:09.120 cpu : usr=97.75%, sys=1.53%, ctx=48, majf=0, minf=26 00:35:09.120 IO depths : 1=2.6%, 2=8.9%, 4=25.1%, 8=53.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:35:09.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.120 filename0: (groupid=0, jobs=1): err= 0: pid=1685624: Sat Jul 13 07:22:36 2024 00:35:09.120 read: IOPS=63, BW=255KiB/s (261kB/s)(2560KiB/10047msec) 00:35:09.120 slat (nsec): min=8319, max=91299, avg=25094.85, stdev=15699.29 00:35:09.120 clat (msec): min=132, max=435, avg=250.96, stdev=47.69 00:35:09.120 lat (msec): min=132, max=435, avg=250.99, stdev=47.69 00:35:09.120 clat percentiles (msec): 00:35:09.120 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 201], 20.00th=[ 211], 00:35:09.120 | 30.00th=[ 224], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 264], 00:35:09.120 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 321], 00:35:09.120 | 99.00th=[ 338], 99.50th=[ 422], 99.90th=[ 435], 99.95th=[ 435], 00:35:09.120 | 99.99th=[ 435] 00:35:09.120 bw ( KiB/s): min= 128, max= 384, per=4.09%, avg=249.60, stdev=48.53, samples=20 00:35:09.120 iops : min= 32, max= 96, avg=62.40, stdev=12.13, samples=20 00:35:09.120 lat (msec) : 250=58.13%, 500=41.88% 00:35:09.120 cpu : usr=98.09%, sys=1.50%, ctx=18, majf=0, minf=23 00:35:09.120 IO depths : 1=2.8%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:09.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.120 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.120 filename0: (groupid=0, jobs=1): err= 0: pid=1685625: Sat Jul 13 07:22:36 2024 00:35:09.120 read: IOPS=63, BW=255KiB/s (261kB/s)(2560KiB/10050msec) 00:35:09.121 slat (usec): min=8, max=124, avg=29.97, stdev=22.94 00:35:09.121 clat (msec): min=110, max=413, avg=251.00, stdev=46.67 00:35:09.121 lat (msec): min=110, max=413, avg=251.03, stdev=46.68 00:35:09.121 clat percentiles (msec): 00:35:09.121 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 203], 20.00th=[ 211], 00:35:09.121 | 30.00th=[ 226], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 264], 00:35:09.121 | 70.00th=[ 288], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 321], 00:35:09.121 | 99.00th=[ 347], 99.50th=[ 397], 99.90th=[ 414], 99.95th=[ 414], 00:35:09.121 | 99.99th=[ 414] 00:35:09.121 bw ( KiB/s): min= 128, max= 368, per=4.09%, avg=249.60, stdev=47.12, samples=20 00:35:09.121 iops : min= 32, max= 92, avg=62.40, stdev=11.78, samples=20 00:35:09.121 lat (msec) : 250=58.44%, 500=41.56% 00:35:09.121 cpu : 
usr=97.78%, sys=1.69%, ctx=33, majf=0, minf=29 00:35:09.121 IO depths : 1=2.3%, 2=8.4%, 4=24.5%, 8=54.5%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:09.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.121 filename0: (groupid=0, jobs=1): err= 0: pid=1685626: Sat Jul 13 07:22:36 2024 00:35:09.121 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10071msec) 00:35:09.121 slat (nsec): min=8518, max=94853, avg=32588.64, stdev=15764.51 00:35:09.121 clat (msec): min=137, max=457, avg=287.48, stdev=55.50 00:35:09.121 lat (msec): min=137, max=457, avg=287.51, stdev=55.49 00:35:09.121 clat percentiles (msec): 00:35:09.121 | 1.00th=[ 169], 5.00th=[ 194], 10.00th=[ 203], 20.00th=[ 234], 00:35:09.121 | 30.00th=[ 266], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 309], 00:35:09.121 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 351], 95.00th=[ 372], 00:35:09.121 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 460], 99.95th=[ 460], 00:35:09.121 | 99.99th=[ 460] 00:35:09.121 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=217.60, stdev=58.59, samples=20 00:35:09.121 iops : min= 32, max= 64, avg=54.40, stdev=14.65, samples=20 00:35:09.121 lat (msec) : 250=23.21%, 500=76.79% 00:35:09.121 cpu : usr=97.92%, sys=1.61%, ctx=16, majf=0, minf=32 00:35:09.121 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:09.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.121 filename1: (groupid=0, jobs=1): err= 0: pid=1685627: Sat Jul 13 07:22:36 2024 00:35:09.121 read: IOPS=55, BW=223KiB/s (228kB/s)(2240KiB/10067msec) 00:35:09.121 slat (nsec): min=8870, max=59718, avg=30555.36, stdev=9575.16 00:35:09.121 clat (msec): min=148, max=453, avg=287.34, stdev=49.58 00:35:09.121 lat (msec): min=148, max=453, avg=287.37, stdev=49.58 00:35:09.121 clat percentiles (msec): 00:35:09.121 | 1.00th=[ 169], 5.00th=[ 197], 10.00th=[ 226], 20.00th=[ 249], 00:35:09.121 | 30.00th=[ 266], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 309], 00:35:09.121 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 368], 00:35:09.121 | 99.00th=[ 376], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:35:09.121 | 99.99th=[ 456] 00:35:09.121 bw ( KiB/s): min= 128, max= 272, per=3.56%, avg=217.60, stdev=58.82, samples=20 00:35:09.121 iops : min= 32, max= 68, avg=54.40, stdev=14.71, samples=20 00:35:09.121 lat (msec) : 250=20.71%, 500=79.29% 00:35:09.121 cpu : usr=97.94%, sys=1.61%, ctx=17, majf=0, minf=25 00:35:09.121 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:09.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.121 filename1: (groupid=0, jobs=1): err= 0: pid=1685628: Sat Jul 13 07:22:36 2024 00:35:09.121 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10075msec) 00:35:09.121 slat (usec): min=8, max=122, avg=53.04, stdev=29.61 
00:35:09.121 clat (msec): min=125, max=448, avg=287.40, stdev=50.35 00:35:09.121 lat (msec): min=125, max=448, avg=287.45, stdev=50.34 00:35:09.121 clat percentiles (msec): 00:35:09.121 | 1.00th=[ 142], 5.00th=[ 197], 10.00th=[ 211], 20.00th=[ 249], 00:35:09.121 | 30.00th=[ 268], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 309], 00:35:09.121 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 372], 00:35:09.121 | 99.00th=[ 372], 99.50th=[ 418], 99.90th=[ 447], 99.95th=[ 447], 00:35:09.121 | 99.99th=[ 447] 00:35:09.121 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=217.60, stdev=58.59, samples=20 00:35:09.121 iops : min= 32, max= 64, avg=54.40, stdev=14.65, samples=20 00:35:09.121 lat (msec) : 250=20.71%, 500=79.29% 00:35:09.121 cpu : usr=97.36%, sys=1.88%, ctx=41, majf=0, minf=31 00:35:09.121 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:09.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.121 filename1: (groupid=0, jobs=1): err= 0: pid=1685629: Sat Jul 13 07:22:36 2024 00:35:09.121 read: IOPS=71, BW=284KiB/s (291kB/s)(2872KiB/10097msec) 00:35:09.121 slat (usec): min=6, max=234, avg=25.37, stdev=26.02 00:35:09.121 clat (msec): min=91, max=359, avg=224.48, stdev=44.58 00:35:09.121 lat (msec): min=91, max=359, avg=224.51, stdev=44.58 00:35:09.121 clat percentiles (msec): 00:35:09.121 | 1.00th=[ 91], 5.00th=[ 161], 10.00th=[ 180], 20.00th=[ 201], 00:35:09.121 | 30.00th=[ 207], 40.00th=[ 218], 50.00th=[ 226], 60.00th=[ 234], 00:35:09.121 | 70.00th=[ 239], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 300], 00:35:09.121 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:35:09.121 | 99.99th=[ 359] 00:35:09.121 bw ( KiB/s): min= 128, max= 384, per=4.59%, avg=280.80, stdev=54.81, samples=20 00:35:09.121 iops : min= 32, max= 96, avg=70.20, stdev=13.70, samples=20 00:35:09.121 lat (msec) : 100=4.32%, 250=71.03%, 500=24.65% 00:35:09.121 cpu : usr=96.33%, sys=2.28%, ctx=77, majf=0, minf=31 00:35:09.121 IO depths : 1=2.5%, 2=5.4%, 4=14.8%, 8=67.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:09.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 complete : 0=0.0%, 4=91.0%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.121 filename1: (groupid=0, jobs=1): err= 0: pid=1685630: Sat Jul 13 07:22:36 2024 00:35:09.121 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10074msec) 00:35:09.121 slat (nsec): min=8161, max=90765, avg=27595.98, stdev=22870.04 00:35:09.121 clat (msec): min=129, max=451, avg=286.56, stdev=59.94 00:35:09.121 lat (msec): min=129, max=451, avg=286.59, stdev=59.93 00:35:09.121 clat percentiles (msec): 00:35:09.121 | 1.00th=[ 142], 5.00th=[ 169], 10.00th=[ 197], 20.00th=[ 234], 00:35:09.121 | 30.00th=[ 257], 40.00th=[ 284], 50.00th=[ 296], 60.00th=[ 313], 00:35:09.121 | 70.00th=[ 317], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 380], 00:35:09.121 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 451], 99.95th=[ 451], 00:35:09.121 | 99.99th=[ 451] 00:35:09.121 bw ( KiB/s): min= 128, max= 368, per=3.56%, avg=217.60, stdev=67.56, samples=20 00:35:09.121 iops : min= 32, max= 92, avg=54.40, stdev=16.89, samples=20 
00:35:09.121 lat (msec) : 250=26.07%, 500=73.93% 00:35:09.121 cpu : usr=97.55%, sys=1.69%, ctx=69, majf=0, minf=32 00:35:09.121 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:09.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.121 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.121 filename1: (groupid=0, jobs=1): err= 0: pid=1685631: Sat Jul 13 07:22:36 2024 00:35:09.121 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10090msec) 00:35:09.121 slat (nsec): min=4915, max=91510, avg=27522.84, stdev=20464.98 00:35:09.121 clat (msec): min=141, max=430, avg=257.55, stdev=52.43 00:35:09.121 lat (msec): min=141, max=430, avg=257.58, stdev=52.43 00:35:09.121 clat percentiles (msec): 00:35:09.121 | 1.00th=[ 171], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 213], 00:35:09.122 | 30.00th=[ 226], 40.00th=[ 236], 50.00th=[ 241], 60.00th=[ 271], 00:35:09.122 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 321], 95.00th=[ 342], 00:35:09.122 | 99.00th=[ 351], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 430], 00:35:09.122 | 99.99th=[ 430] 00:35:09.122 bw ( KiB/s): min= 128, max= 368, per=3.99%, avg=243.20, stdev=52.07, samples=20 00:35:09.122 iops : min= 32, max= 92, avg=60.80, stdev=13.02, samples=20 00:35:09.122 lat (msec) : 250=51.92%, 500=48.08% 00:35:09.122 cpu : usr=97.86%, sys=1.60%, ctx=23, majf=0, minf=24 00:35:09.122 IO depths : 1=3.0%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:09.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.122 filename1: (groupid=0, jobs=1): err= 0: pid=1685632: Sat Jul 13 07:22:36 2024 00:35:09.122 read: IOPS=84, BW=339KiB/s (347kB/s)(3424KiB/10101msec) 00:35:09.122 slat (usec): min=4, max=104, avg=13.48, stdev=12.47 00:35:09.122 clat (msec): min=22, max=331, avg=188.41, stdev=48.58 00:35:09.122 lat (msec): min=22, max=331, avg=188.42, stdev=48.57 00:35:09.122 clat percentiles (msec): 00:35:09.122 | 1.00th=[ 23], 5.00th=[ 95], 10.00th=[ 138], 20.00th=[ 155], 00:35:09.122 | 30.00th=[ 169], 40.00th=[ 182], 50.00th=[ 201], 60.00th=[ 211], 00:35:09.122 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 230], 95.00th=[ 241], 00:35:09.122 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 330], 99.95th=[ 330], 00:35:09.122 | 99.99th=[ 330] 00:35:09.122 bw ( KiB/s): min= 224, max= 624, per=5.50%, avg=336.00, stdev=89.76, samples=20 00:35:09.122 iops : min= 56, max= 156, avg=84.00, stdev=22.44, samples=20 00:35:09.122 lat (msec) : 50=3.74%, 100=1.87%, 250=90.42%, 500=3.97% 00:35:09.122 cpu : usr=97.76%, sys=1.72%, ctx=22, majf=0, minf=61 00:35:09.122 IO depths : 1=0.6%, 2=2.3%, 4=11.3%, 8=73.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:09.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 complete : 0=0.0%, 4=90.2%, 8=4.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 issued rwts: total=856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.122 filename1: (groupid=0, jobs=1): err= 0: pid=1685633: Sat Jul 13 07:22:36 2024 00:35:09.122 read: IOPS=64, BW=259KiB/s 
(265kB/s)(2616KiB/10092msec) 00:35:09.122 slat (usec): min=8, max=137, avg=32.04, stdev=23.96 00:35:09.122 clat (msec): min=138, max=436, avg=246.50, stdev=48.21 00:35:09.122 lat (msec): min=138, max=436, avg=246.53, stdev=48.21 00:35:09.122 clat percentiles (msec): 00:35:09.122 | 1.00th=[ 140], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 211], 00:35:09.122 | 30.00th=[ 220], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 241], 00:35:09.122 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 317], 95.00th=[ 330], 00:35:09.122 | 99.00th=[ 334], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 435], 00:35:09.122 | 99.99th=[ 435] 00:35:09.122 bw ( KiB/s): min= 128, max= 384, per=4.18%, avg=255.20, stdev=57.45, samples=20 00:35:09.122 iops : min= 32, max= 96, avg=63.80, stdev=14.36, samples=20 00:35:09.122 lat (msec) : 250=61.16%, 500=38.84% 00:35:09.122 cpu : usr=97.78%, sys=1.44%, ctx=18, majf=0, minf=31 00:35:09.122 IO depths : 1=4.6%, 2=10.9%, 4=25.1%, 8=51.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:09.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.122 filename1: (groupid=0, jobs=1): err= 0: pid=1685634: Sat Jul 13 07:22:36 2024 00:35:09.122 read: IOPS=73, BW=292KiB/s (299kB/s)(2952KiB/10094msec) 00:35:09.122 slat (nsec): min=5405, max=77503, avg=18047.28, stdev=13437.98 00:35:09.122 clat (msec): min=104, max=318, avg=218.37, stdev=44.06 00:35:09.122 lat (msec): min=104, max=318, avg=218.38, stdev=44.06 00:35:09.122 clat percentiles (msec): 00:35:09.122 | 1.00th=[ 105], 5.00th=[ 142], 10.00th=[ 150], 20.00th=[ 186], 00:35:09.122 | 30.00th=[ 203], 40.00th=[ 213], 50.00th=[ 218], 60.00th=[ 228], 00:35:09.122 | 70.00th=[ 236], 80.00th=[ 264], 90.00th=[ 288], 95.00th=[ 288], 00:35:09.122 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:35:09.122 | 99.99th=[ 317] 00:35:09.122 bw ( KiB/s): min= 128, max= 384, per=4.73%, avg=288.80, stdev=65.55, samples=20 00:35:09.122 iops : min= 32, max= 96, avg=72.20, stdev=16.39, samples=20 00:35:09.122 lat (msec) : 250=78.59%, 500=21.41% 00:35:09.122 cpu : usr=97.87%, sys=1.64%, ctx=28, majf=0, minf=28 00:35:09.122 IO depths : 1=1.9%, 2=6.2%, 4=19.1%, 8=62.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:35:09.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.122 filename2: (groupid=0, jobs=1): err= 0: pid=1685635: Sat Jul 13 07:22:36 2024 00:35:09.122 read: IOPS=67, BW=271KiB/s (278kB/s)(2736KiB/10092msec) 00:35:09.122 slat (usec): min=5, max=280, avg=20.91, stdev=18.08 00:35:09.122 clat (msec): min=137, max=378, avg=234.68, stdev=39.89 00:35:09.122 lat (msec): min=137, max=378, avg=234.70, stdev=39.90 00:35:09.122 clat percentiles (msec): 00:35:09.122 | 1.00th=[ 142], 5.00th=[ 174], 10.00th=[ 186], 20.00th=[ 207], 00:35:09.122 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 228], 60.00th=[ 236], 00:35:09.122 | 70.00th=[ 241], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 300], 00:35:09.122 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:35:09.122 | 99.99th=[ 380] 00:35:09.122 bw ( KiB/s): min= 144, max= 384, per=4.45%, avg=271.20, 
stdev=49.38, samples=20 00:35:09.122 iops : min= 36, max= 96, avg=67.80, stdev=12.34, samples=20 00:35:09.122 lat (msec) : 250=70.47%, 500=29.53% 00:35:09.122 cpu : usr=96.77%, sys=2.26%, ctx=36, majf=0, minf=26 00:35:09.122 IO depths : 1=1.5%, 2=5.1%, 4=17.0%, 8=65.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:09.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 issued rwts: total=684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.122 filename2: (groupid=0, jobs=1): err= 0: pid=1685636: Sat Jul 13 07:22:36 2024 00:35:09.122 read: IOPS=56, BW=228KiB/s (233kB/s)(2296KiB/10083msec) 00:35:09.122 slat (nsec): min=5297, max=75097, avg=21545.89, stdev=12029.81 00:35:09.122 clat (msec): min=117, max=475, avg=280.72, stdev=63.81 00:35:09.122 lat (msec): min=117, max=475, avg=280.74, stdev=63.80 00:35:09.122 clat percentiles (msec): 00:35:09.122 | 1.00th=[ 140], 5.00th=[ 176], 10.00th=[ 199], 20.00th=[ 228], 00:35:09.122 | 30.00th=[ 249], 40.00th=[ 275], 50.00th=[ 292], 60.00th=[ 309], 00:35:09.122 | 70.00th=[ 317], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 372], 00:35:09.122 | 99.00th=[ 464], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:35:09.122 | 99.99th=[ 477] 00:35:09.122 bw ( KiB/s): min= 128, max= 368, per=3.66%, avg=223.20, stdev=76.73, samples=20 00:35:09.122 iops : min= 32, max= 92, avg=55.80, stdev=19.18, samples=20 00:35:09.122 lat (msec) : 250=32.06%, 500=67.94% 00:35:09.122 cpu : usr=97.90%, sys=1.49%, ctx=20, majf=0, minf=30 00:35:09.122 IO depths : 1=1.9%, 2=8.2%, 4=25.1%, 8=54.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:35:09.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.122 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.122 filename2: (groupid=0, jobs=1): err= 0: pid=1685637: Sat Jul 13 07:22:36 2024 00:35:09.122 read: IOPS=80, BW=324KiB/s (332kB/s)(3272KiB/10099msec) 00:35:09.122 slat (usec): min=4, max=120, avg=22.30, stdev=23.94 00:35:09.122 clat (msec): min=21, max=393, avg=196.63, stdev=53.49 00:35:09.122 lat (msec): min=21, max=393, avg=196.65, stdev=53.49 00:35:09.122 clat percentiles (msec): 00:35:09.122 | 1.00th=[ 22], 5.00th=[ 90], 10.00th=[ 138], 20.00th=[ 163], 00:35:09.122 | 30.00th=[ 186], 40.00th=[ 199], 50.00th=[ 207], 60.00th=[ 215], 00:35:09.122 | 70.00th=[ 220], 80.00th=[ 230], 90.00th=[ 234], 95.00th=[ 241], 00:35:09.122 | 99.00th=[ 351], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:35:09.123 | 99.99th=[ 393] 00:35:09.123 bw ( KiB/s): min= 240, max= 640, per=5.25%, avg=320.80, stdev=93.36, samples=20 00:35:09.123 iops : min= 60, max= 160, avg=80.20, stdev=23.34, samples=20 00:35:09.123 lat (msec) : 50=2.81%, 100=3.06%, 250=89.73%, 500=4.40% 00:35:09.123 cpu : usr=97.91%, sys=1.56%, ctx=28, majf=0, minf=28 00:35:09.123 IO depths : 1=1.3%, 2=5.9%, 4=19.6%, 8=61.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:35:09.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 issued rwts: total=818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.123 filename2: (groupid=0, jobs=1): err= 0: 
pid=1685638: Sat Jul 13 07:22:36 2024 00:35:09.123 read: IOPS=70, BW=281KiB/s (288kB/s)(2832KiB/10084msec) 00:35:09.123 slat (usec): min=8, max=181, avg=21.91, stdev=22.59 00:35:09.123 clat (msec): min=93, max=348, avg=226.72, stdev=43.85 00:35:09.123 lat (msec): min=93, max=348, avg=226.74, stdev=43.86 00:35:09.123 clat percentiles (msec): 00:35:09.123 | 1.00th=[ 94], 5.00th=[ 144], 10.00th=[ 176], 20.00th=[ 201], 00:35:09.123 | 30.00th=[ 211], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 232], 00:35:09.123 | 70.00th=[ 241], 80.00th=[ 264], 90.00th=[ 288], 95.00th=[ 288], 00:35:09.123 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 351], 99.95th=[ 351], 00:35:09.123 | 99.99th=[ 351] 00:35:09.123 bw ( KiB/s): min= 144, max= 384, per=4.53%, avg=276.80, stdev=52.20, samples=20 00:35:09.123 iops : min= 36, max= 96, avg=69.20, stdev=13.05, samples=20 00:35:09.123 lat (msec) : 100=2.26%, 250=72.60%, 500=25.14% 00:35:09.123 cpu : usr=97.89%, sys=1.53%, ctx=17, majf=0, minf=23 00:35:09.123 IO depths : 1=1.7%, 2=4.9%, 4=15.7%, 8=66.7%, 16=11.0%, 32=0.0%, >=64=0.0% 00:35:09.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 issued rwts: total=708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.123 filename2: (groupid=0, jobs=1): err= 0: pid=1685639: Sat Jul 13 07:22:36 2024 00:35:09.123 read: IOPS=55, BW=222KiB/s (227kB/s)(2232KiB/10069msec) 00:35:09.123 slat (usec): min=8, max=116, avg=51.54, stdev=27.40 00:35:09.123 clat (msec): min=138, max=475, avg=288.12, stdev=60.88 00:35:09.123 lat (msec): min=138, max=475, avg=288.17, stdev=60.87 00:35:09.123 clat percentiles (msec): 00:35:09.123 | 1.00th=[ 140], 5.00th=[ 180], 10.00th=[ 203], 20.00th=[ 234], 00:35:09.123 | 30.00th=[ 264], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 317], 00:35:09.123 | 70.00th=[ 321], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 372], 00:35:09.123 | 99.00th=[ 443], 99.50th=[ 464], 99.90th=[ 477], 99.95th=[ 477], 00:35:09.123 | 99.99th=[ 477] 00:35:09.123 bw ( KiB/s): min= 128, max= 384, per=3.54%, avg=216.80, stdev=70.12, samples=20 00:35:09.123 iops : min= 32, max= 96, avg=54.20, stdev=17.53, samples=20 00:35:09.123 lat (msec) : 250=25.81%, 500=74.19% 00:35:09.123 cpu : usr=97.72%, sys=1.75%, ctx=38, majf=0, minf=27 00:35:09.123 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:09.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.123 filename2: (groupid=0, jobs=1): err= 0: pid=1685640: Sat Jul 13 07:22:36 2024 00:35:09.123 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10071msec) 00:35:09.123 slat (nsec): min=5607, max=91914, avg=31869.40, stdev=12022.62 00:35:09.123 clat (msec): min=125, max=455, avg=287.46, stdev=54.50 00:35:09.123 lat (msec): min=125, max=455, avg=287.49, stdev=54.50 00:35:09.123 clat percentiles (msec): 00:35:09.123 | 1.00th=[ 165], 5.00th=[ 194], 10.00th=[ 203], 20.00th=[ 234], 00:35:09.123 | 30.00th=[ 266], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 309], 00:35:09.123 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 351], 95.00th=[ 372], 00:35:09.123 | 99.00th=[ 443], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:35:09.123 | 99.99th=[ 
456] 00:35:09.123 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=217.60, stdev=55.28, samples=20 00:35:09.123 iops : min= 32, max= 64, avg=54.40, stdev=13.82, samples=20 00:35:09.123 lat (msec) : 250=22.50%, 500=77.50% 00:35:09.123 cpu : usr=97.58%, sys=1.64%, ctx=31, majf=0, minf=26 00:35:09.123 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:09.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.123 filename2: (groupid=0, jobs=1): err= 0: pid=1685641: Sat Jul 13 07:22:36 2024 00:35:09.123 read: IOPS=54, BW=217KiB/s (222kB/s)(2176KiB/10026msec) 00:35:09.123 slat (nsec): min=8363, max=79052, avg=23027.26, stdev=8244.91 00:35:09.123 clat (msec): min=174, max=446, avg=294.68, stdev=51.79 00:35:09.123 lat (msec): min=174, max=446, avg=294.70, stdev=51.79 00:35:09.123 clat percentiles (msec): 00:35:09.123 | 1.00th=[ 176], 5.00th=[ 205], 10.00th=[ 228], 20.00th=[ 249], 00:35:09.123 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 313], 00:35:09.123 | 70.00th=[ 321], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 380], 00:35:09.123 | 99.00th=[ 422], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:35:09.123 | 99.99th=[ 447] 00:35:09.123 bw ( KiB/s): min= 128, max= 384, per=3.46%, avg=211.20, stdev=69.95, samples=20 00:35:09.123 iops : min= 32, max= 96, avg=52.80, stdev=17.49, samples=20 00:35:09.123 lat (msec) : 250=22.06%, 500=77.94% 00:35:09.123 cpu : usr=98.00%, sys=1.46%, ctx=40, majf=0, minf=33 00:35:09.123 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:09.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.123 filename2: (groupid=0, jobs=1): err= 0: pid=1685642: Sat Jul 13 07:22:36 2024 00:35:09.123 read: IOPS=63, BW=255KiB/s (261kB/s)(2560KiB/10050msec) 00:35:09.123 slat (usec): min=8, max=116, avg=28.51, stdev=19.26 00:35:09.123 clat (msec): min=170, max=359, avg=250.99, stdev=42.97 00:35:09.123 lat (msec): min=170, max=359, avg=251.02, stdev=42.98 00:35:09.123 clat percentiles (msec): 00:35:09.123 | 1.00th=[ 171], 5.00th=[ 180], 10.00th=[ 203], 20.00th=[ 211], 00:35:09.123 | 30.00th=[ 224], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 264], 00:35:09.123 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 321], 00:35:09.123 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 359], 00:35:09.123 | 99.99th=[ 359] 00:35:09.123 bw ( KiB/s): min= 128, max= 384, per=4.09%, avg=249.60, stdev=48.53, samples=20 00:35:09.123 iops : min= 32, max= 96, avg=62.40, stdev=12.13, samples=20 00:35:09.123 lat (msec) : 250=57.81%, 500=42.19% 00:35:09.123 cpu : usr=97.96%, sys=1.59%, ctx=21, majf=0, minf=26 00:35:09.123 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:09.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.123 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.123 latency : target=0, window=0, percentile=100.00%, depth=16 
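Each "filenameN" block above is one fio job doing random reads against a null bdev exported over NVMe/TCP; the quickest per-job health check is the "read:" line (IOPS and bandwidth) together with "issued rwts:" (everything issued completed, nothing short or dropped). A minimal sketch of pulling those figures out of a saved copy of this console output with standard tools follows -- the file name fio.log is an assumption, since this job only streams to stdout:

# fio.log is hypothetical: a saved copy of this console output.
# Count the fio jobs and average their reported read IOPS.
grep -Eo 'read: IOPS=[0-9]+' fio.log |
  awk -F= '{iops += $2; n++}
           END {if (n) printf "jobs=%d, mean IOPS=%.1f\n", n, iops/n}'

The same one-liner works unchanged on the digest run further below, since fio's per-job summary format is identical there.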
00:35:09.123 00:35:09.123 Run status group 0 (all jobs): 00:35:09.123 READ: bw=6094KiB/s (6241kB/s), 217KiB/s-339KiB/s (222kB/s-347kB/s), io=60.1MiB (63.0MB), run=10026-10101msec 00:35:09.123 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:09.123 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:09.123 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.123 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 bdev_null0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 [2024-07-13 07:22:37.165535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 bdev_null1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:09.124 { 00:35:09.124 "params": { 00:35:09.124 "name": "Nvme$subsystem", 00:35:09.124 "trtype": "$TEST_TRANSPORT", 00:35:09.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:09.124 "adrfam": "ipv4", 00:35:09.124 "trsvcid": "$NVMF_PORT", 00:35:09.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:09.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:09.124 "hdgst": ${hdgst:-false}, 00:35:09.124 "ddgst": ${ddgst:-false} 00:35:09.124 }, 00:35:09.124 "method": "bdev_nvme_attach_controller" 00:35:09.124 } 00:35:09.124 EOF 00:35:09.124 )") 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.124 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:09.125 07:22:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:09.125 { 00:35:09.125 "params": { 00:35:09.125 "name": "Nvme$subsystem", 00:35:09.125 "trtype": "$TEST_TRANSPORT", 00:35:09.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:09.125 "adrfam": "ipv4", 00:35:09.125 "trsvcid": "$NVMF_PORT", 00:35:09.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:09.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:09.125 "hdgst": ${hdgst:-false}, 00:35:09.125 "ddgst": ${ddgst:-false} 00:35:09.125 }, 00:35:09.125 "method": "bdev_nvme_attach_controller" 00:35:09.125 } 00:35:09.125 EOF 00:35:09.125 )") 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:09.125 "params": { 00:35:09.125 "name": "Nvme0", 00:35:09.125 "trtype": "tcp", 00:35:09.125 "traddr": "10.0.0.2", 00:35:09.125 "adrfam": "ipv4", 00:35:09.125 "trsvcid": "4420", 00:35:09.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.125 "hdgst": false, 00:35:09.125 "ddgst": false 00:35:09.125 }, 00:35:09.125 "method": "bdev_nvme_attach_controller" 00:35:09.125 },{ 00:35:09.125 "params": { 00:35:09.125 "name": "Nvme1", 00:35:09.125 "trtype": "tcp", 00:35:09.125 "traddr": "10.0.0.2", 00:35:09.125 "adrfam": "ipv4", 00:35:09.125 "trsvcid": "4420", 00:35:09.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:09.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:09.125 "hdgst": false, 00:35:09.125 "ddgst": false 00:35:09.125 }, 00:35:09.125 "method": "bdev_nvme_attach_controller" 00:35:09.125 }' 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:09.125 07:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.125 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:09.125 ... 00:35:09.125 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:09.125 ... 
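The xtrace above shows the whole wiring: gen_nvmf_target_json prints one bdev_nvme_attach_controller config per subsystem to fio on /dev/fd/62, gen_fio_conf supplies the job file on /dev/fd/61, and fio runs with the SPDK bdev ioengine preloaded. The same setup can be reproduced outside the harness with ordinary files; the sketch below reuses this run's tree path and 10.0.0.2:4420 target, while the /tmp file names are made up for illustration:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used by this job

# SPDK JSON config: attach the NVMe/TCP controller so its namespace
# shows up to fio as bdev "Nvme0n1".
cat > /tmp/bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } } ] } ] }
EOF

# Job file matching the parameters this test sets (bs=8k,16k,128k, iodepth=8);
# the SPDK fio plugin requires thread=1.
cat > /tmp/dif.fio <<'EOF'
[filename0]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
filename=Nvme0n1
EOF

LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/dif.fio

Feeding both files through /dev/fd, as the harness does, is the same mechanism minus the temporary files.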
00:35:09.125 fio-3.35 00:35:09.125 Starting 4 threads 00:35:09.125 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.386 00:35:14.386 filename0: (groupid=0, jobs=1): err= 0: pid=1686906: Sat Jul 13 07:22:43 2024 00:35:14.386 read: IOPS=1872, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5003msec) 00:35:14.386 slat (nsec): min=5190, max=70556, avg=16365.82, stdev=8993.73 00:35:14.386 clat (usec): min=723, max=7780, avg=4220.11, stdev=671.89 00:35:14.386 lat (usec): min=741, max=7800, avg=4236.47, stdev=671.48 00:35:14.386 clat percentiles (usec): 00:35:14.386 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3851], 00:35:14.386 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:35:14.386 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5735], 00:35:14.386 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 7701], 00:35:14.386 | 99.99th=[ 7767] 00:35:14.386 bw ( KiB/s): min=14704, max=15360, per=24.89%, avg=14975.70, stdev=214.24, samples=10 00:35:14.386 iops : min= 1838, max= 1920, avg=1871.90, stdev=26.78, samples=10 00:35:14.386 lat (usec) : 750=0.01%, 1000=0.05% 00:35:14.386 lat (msec) : 2=0.22%, 4=30.23%, 10=69.49% 00:35:14.386 cpu : usr=95.54%, sys=4.00%, ctx=13, majf=0, minf=56 00:35:14.386 IO depths : 1=0.1%, 2=8.3%, 4=63.7%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 issued rwts: total=9366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.386 filename0: (groupid=0, jobs=1): err= 0: pid=1686907: Sat Jul 13 07:22:43 2024 00:35:14.386 read: IOPS=1889, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5002msec) 00:35:14.386 slat (nsec): min=5274, max=71170, avg=16800.84, stdev=9249.40 00:35:14.386 clat (usec): min=823, max=7577, avg=4179.45, stdev=591.94 00:35:14.386 lat (usec): min=849, max=7617, avg=4196.25, stdev=592.02 00:35:14.386 clat percentiles (usec): 00:35:14.386 | 1.00th=[ 2704], 5.00th=[ 3359], 10.00th=[ 3621], 20.00th=[ 3851], 00:35:14.386 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:35:14.386 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5276], 00:35:14.386 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7439], 00:35:14.386 | 99.99th=[ 7570] 00:35:14.386 bw ( KiB/s): min=14589, max=15744, per=25.12%, avg=15110.10, stdev=374.65, samples=10 00:35:14.386 iops : min= 1823, max= 1968, avg=1888.70, stdev=46.93, samples=10 00:35:14.386 lat (usec) : 1000=0.04% 00:35:14.386 lat (msec) : 2=0.26%, 4=29.94%, 10=69.75% 00:35:14.386 cpu : usr=94.88%, sys=4.60%, ctx=16, majf=0, minf=72 00:35:14.386 IO depths : 1=0.2%, 2=7.4%, 4=64.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 issued rwts: total=9449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.386 filename1: (groupid=0, jobs=1): err= 0: pid=1686908: Sat Jul 13 07:22:43 2024 00:35:14.386 read: IOPS=1902, BW=14.9MiB/s (15.6MB/s)(74.3MiB/5002msec) 00:35:14.386 slat (nsec): min=5260, max=68322, avg=14792.56, stdev=8575.14 00:35:14.386 clat (usec): min=967, max=7563, avg=4156.31, stdev=647.78 00:35:14.386 lat (usec): min=984, max=7591, avg=4171.11, stdev=647.98 
00:35:14.386 clat percentiles (usec): 00:35:14.386 | 1.00th=[ 2671], 5.00th=[ 3195], 10.00th=[ 3458], 20.00th=[ 3785], 00:35:14.386 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:35:14.386 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5473], 00:35:14.386 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7308], 00:35:14.386 | 99.99th=[ 7570] 00:35:14.386 bw ( KiB/s): min=14592, max=15600, per=25.29%, avg=15217.60, stdev=323.93, samples=10 00:35:14.386 iops : min= 1824, max= 1950, avg=1902.20, stdev=40.49, samples=10 00:35:14.386 lat (usec) : 1000=0.02% 00:35:14.386 lat (msec) : 2=0.15%, 4=34.45%, 10=65.38% 00:35:14.386 cpu : usr=94.00%, sys=5.52%, ctx=15, majf=0, minf=64 00:35:14.386 IO depths : 1=0.1%, 2=8.4%, 4=64.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 issued rwts: total=9516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.386 filename1: (groupid=0, jobs=1): err= 0: pid=1686909: Sat Jul 13 07:22:43 2024 00:35:14.386 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5001msec) 00:35:14.386 slat (nsec): min=5278, max=68276, avg=15713.45, stdev=8767.53 00:35:14.386 clat (usec): min=739, max=8462, avg=4254.62, stdev=668.74 00:35:14.386 lat (usec): min=754, max=8477, avg=4270.34, stdev=668.32 00:35:14.386 clat percentiles (usec): 00:35:14.386 | 1.00th=[ 2868], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3884], 00:35:14.386 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:35:14.386 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5800], 00:35:14.386 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7373], 00:35:14.386 | 99.99th=[ 8455] 00:35:14.386 bw ( KiB/s): min=14032, max=15296, per=24.66%, avg=14837.33, stdev=355.62, samples=9 00:35:14.386 iops : min= 1754, max= 1912, avg=1854.67, stdev=44.45, samples=9 00:35:14.386 lat (usec) : 750=0.01%, 1000=0.01% 00:35:14.386 lat (msec) : 2=0.22%, 4=27.78%, 10=71.99% 00:35:14.386 cpu : usr=95.26%, sys=4.30%, ctx=9, majf=0, minf=36 00:35:14.386 IO depths : 1=0.1%, 2=8.4%, 4=64.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.386 issued rwts: total=9292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.386 00:35:14.386 Run status group 0 (all jobs): 00:35:14.386 READ: bw=58.8MiB/s (61.6MB/s), 14.5MiB/s-14.9MiB/s (15.2MB/s-15.6MB/s), io=294MiB (308MB), run=5001-5003msec 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 
07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 00:35:14.386 real 0m23.993s 00:35:14.386 user 4m33.308s 00:35:14.386 sys 0m6.935s 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 ************************************ 00:35:14.386 END TEST fio_dif_rand_params 00:35:14.386 ************************************ 00:35:14.386 07:22:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:14.386 07:22:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:14.386 07:22:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:14.386 07:22:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 ************************************ 00:35:14.386 START TEST fio_dif_digest 00:35:14.386 ************************************ 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:14.386 07:22:43 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 bdev_null0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.386 [2024-07-13 07:22:43.512834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:14.386 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:14.386 { 00:35:14.386 "params": { 00:35:14.386 "name": "Nvme$subsystem", 00:35:14.386 "trtype": "$TEST_TRANSPORT", 00:35:14.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.386 "adrfam": "ipv4", 00:35:14.386 "trsvcid": "$NVMF_PORT", 00:35:14.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.387 "hdgst": ${hdgst:-false}, 
00:35:14.387 "ddgst": ${ddgst:-false} 00:35:14.387 }, 00:35:14.387 "method": "bdev_nvme_attach_controller" 00:35:14.387 } 00:35:14.387 EOF 00:35:14.387 )") 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:14.387 "params": { 00:35:14.387 "name": "Nvme0", 00:35:14.387 "trtype": "tcp", 00:35:14.387 "traddr": "10.0.0.2", 00:35:14.387 "adrfam": "ipv4", 00:35:14.387 "trsvcid": "4420", 00:35:14.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.387 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.387 "hdgst": true, 00:35:14.387 "ddgst": true 00:35:14.387 }, 00:35:14.387 "method": "bdev_nvme_attach_controller" 00:35:14.387 }' 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:14.387 07:22:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.387 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:14.387 ... 
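Relative to the earlier runs, the functional change is visible in the printed attach parameters: "hdgst": true and "ddgst": true, so the initiator negotiates NVMe/TCP header and data digests and every PDU is CRC32C-checksummed in both directions. The same attachment can be made by hand with the RPC CLI; this is a sketch using this run's address and NQN, and if the digest flag spellings differ in a given SPDK version, rpc.py bdev_nvme_attach_controller -h lists the current ones:

# Assumes a running SPDK application; -f selects the address family.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 --hdgst --ddgst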
00:35:14.387 fio-3.35 00:35:14.387 Starting 3 threads 00:35:14.387 EAL: No free 2048 kB hugepages reported on node 1 00:35:26.587 00:35:26.587 filename0: (groupid=0, jobs=1): err= 0: pid=1687773: Sat Jul 13 07:22:54 2024 00:35:26.587 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10045msec) 00:35:26.587 slat (nsec): min=5070, max=80639, avg=18238.27, stdev=4575.76 00:35:26.587 clat (usec): min=9013, max=58054, avg=14557.29, stdev=3534.19 00:35:26.587 lat (usec): min=9031, max=58074, avg=14575.52, stdev=3534.19 00:35:26.587 clat percentiles (usec): 00:35:26.587 | 1.00th=[10421], 5.00th=[12387], 10.00th=[12911], 20.00th=[13435], 00:35:26.587 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:35:26.587 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:35:26.587 | 99.00th=[17433], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:35:26.587 | 99.99th=[57934] 00:35:26.587 bw ( KiB/s): min=22784, max=27648, per=33.75%, avg=26396.25, stdev=1186.61, samples=20 00:35:26.587 iops : min= 178, max= 216, avg=206.20, stdev= 9.27, samples=20 00:35:26.587 lat (msec) : 10=0.53%, 20=98.64%, 50=0.19%, 100=0.63% 00:35:26.587 cpu : usr=93.83%, sys=5.18%, ctx=247, majf=0, minf=191 00:35:26.587 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.587 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.587 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:26.587 filename0: (groupid=0, jobs=1): err= 0: pid=1687774: Sat Jul 13 07:22:54 2024 00:35:26.587 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(250MiB/10044msec) 00:35:26.587 slat (nsec): min=4942, max=93733, avg=16681.94, stdev=4174.65 00:35:26.587 clat (usec): min=8300, max=53042, avg=15001.81, stdev=1906.08 00:35:26.587 lat (usec): min=8314, max=53063, avg=15018.50, stdev=1906.07 00:35:26.587 clat percentiles (usec): 00:35:26.587 | 1.00th=[ 9634], 5.00th=[12125], 10.00th=[13435], 20.00th=[14091], 00:35:26.587 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:35:26.587 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16581], 95.00th=[16909], 00:35:26.587 | 99.00th=[18220], 99.50th=[19006], 99.90th=[24511], 99.95th=[49021], 00:35:26.587 | 99.99th=[53216] 00:35:26.587 bw ( KiB/s): min=24576, max=27904, per=32.74%, avg=25612.80, stdev=824.22, samples=20 00:35:26.587 iops : min= 192, max= 218, avg=200.10, stdev= 6.44, samples=20 00:35:26.587 lat (msec) : 10=1.80%, 20=98.00%, 50=0.15%, 100=0.05% 00:35:26.587 cpu : usr=94.32%, sys=5.01%, ctx=26, majf=0, minf=111 00:35:26.587 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.587 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.587 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:26.587 filename0: (groupid=0, jobs=1): err= 0: pid=1687775: Sat Jul 13 07:22:54 2024 00:35:26.587 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(259MiB/10046msec) 00:35:26.587 slat (usec): min=4, max=110, avg=17.88, stdev= 5.23 00:35:26.587 clat (usec): min=8762, max=55975, avg=14504.01, stdev=3828.65 00:35:26.587 lat (usec): min=8782, max=55994, avg=14521.89, stdev=3828.71 00:35:26.587 clat percentiles (usec): 00:35:26.587 | 
1.00th=[10290], 5.00th=[12256], 10.00th=[12780], 20.00th=[13304], 00:35:26.587 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:35:26.587 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:35:26.587 | 99.00th=[17957], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:35:26.587 | 99.99th=[55837] 00:35:26.587 bw ( KiB/s): min=24576, max=28160, per=33.86%, avg=26483.20, stdev=1001.78, samples=20 00:35:26.587 iops : min= 192, max= 220, avg=206.90, stdev= 7.83, samples=20 00:35:26.587 lat (msec) : 10=0.63%, 20=98.41%, 50=0.19%, 100=0.77% 00:35:26.587 cpu : usr=94.30%, sys=5.26%, ctx=23, majf=0, minf=194 00:35:26.587 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.587 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.587 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:26.587 00:35:26.587 Run status group 0 (all jobs): 00:35:26.587 READ: bw=76.4MiB/s (80.1MB/s), 24.9MiB/s-25.8MiB/s (26.1MB/s-27.0MB/s), io=767MiB (805MB), run=10044-10046msec 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.587 00:35:26.587 real 0m11.165s 00:35:26.587 user 0m29.438s 00:35:26.587 sys 0m1.845s 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:26.587 07:22:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:26.587 ************************************ 00:35:26.587 END TEST fio_dif_digest 00:35:26.587 ************************************ 00:35:26.587 07:22:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:26.587 07:22:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:26.587 07:22:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:35:26.587 rmmod nvme_tcp 00:35:26.587 rmmod nvme_fabrics 00:35:26.587 rmmod nvme_keyring 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1681735 ']' 00:35:26.587 07:22:54 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1681735 00:35:26.587 07:22:54 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1681735 ']' 00:35:26.587 07:22:54 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1681735 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1681735 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1681735' 00:35:26.588 killing process with pid 1681735 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1681735 00:35:26.588 07:22:54 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1681735 00:35:26.588 07:22:54 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:26.588 07:22:54 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:26.588 Waiting for block devices as requested 00:35:26.588 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:26.846 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:26.846 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:27.104 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:27.104 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:27.104 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:27.104 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:27.363 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:27.363 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:27.363 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:27.363 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:27.623 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:27.623 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:27.623 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:27.623 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:27.882 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:27.882 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:27.882 07:22:57 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:27.882 07:22:57 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:27.882 07:22:57 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:27.882 07:22:57 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:27.882 07:22:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.882 07:22:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:27.882 07:22:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.414 07:22:59 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:30.414 00:35:30.414 real 1m6.372s 00:35:30.414 user 6m30.634s 00:35:30.414 sys 0m17.359s 00:35:30.414 07:22:59 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:35:30.414 07:22:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.414 ************************************ 00:35:30.414 END TEST nvmf_dif 00:35:30.414 ************************************ 00:35:30.414 07:22:59 -- common/autotest_common.sh@1142 -- # return 0 00:35:30.414 07:22:59 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:30.414 07:22:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:30.414 07:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.414 07:22:59 -- common/autotest_common.sh@10 -- # set +x 00:35:30.414 ************************************ 00:35:30.414 START TEST nvmf_abort_qd_sizes 00:35:30.414 ************************************ 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:30.414 * Looking for test storage... 00:35:30.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.414 07:22:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:30.414 07:22:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:32.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:32.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:32.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:32.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
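The scan just logged is gather_supported_nvmf_pci_devs: it whitelists Intel E810/X722 and Mellanox device IDs, then resolves each matching PCI function to its kernel net device through sysfs. A minimal standalone sketch of that resolution step follows; find_nvmf_net_devs is a hypothetical helper, not the suite's exact code, and only the E810 ID 0x8086:0x159b is taken from the log.

  # Hedged re-creation of the device-to-netdev mapping logged above.
  find_nvmf_net_devs() {
      local want="$1"                      # "vendor:device", e.g. "0x8086:0x159b"
      local path pci dev
      for path in /sys/bus/pci/devices/*; do
          pci=${path##*/}
          [[ "$(cat "$path/vendor"):$(cat "$path/device")" == "$want" ]] || continue
          echo "Found $pci ($want)"
          for dev in "$path"/net/*; do     # net/ exists only while a net driver is bound
              [[ -e "$dev" ]] && echo "Found net devices under $pci: ${dev##*/}"
          done
      done
  }
  find_nvmf_net_devs "0x8086:0x159b"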
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:35:32.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:32.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms
00:35:32.317
00:35:32.317 --- 10.0.0.2 ping statistics ---
00:35:32.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:32.317 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:32.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:32.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:35:32.317
00:35:32.317 --- 10.0.0.1 ping statistics ---
00:35:32.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:32.317 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
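Everything up to the two successful pings is the phy-mode (NET_TYPE=phy) network bring-up: one port of the two-port E810 (cvl_0_0) is moved into a private network namespace to act as the target while its peer (cvl_0_1) stays in the root namespace as the initiator, so the NVMe/TCP traffic genuinely crosses the physical link. The same topology, condensed into a hedged standalone sketch with names and addresses copied from the log and error handling omitted:

  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0        # target-side port, isolated inside $NS
  INI_IF=cvl_0_1        # initiator port, left in the root namespace

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                            # hide the target port from the host stack
  ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                     # root ns -> target ns, across the cable
  ip netns exec "$NS" ping -c 1 10.0.0.1 # target ns -> root ns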
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']'
00:35:32.317 07:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:35:33.250 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:35:33.250 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:35:33.250 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:35:33.250 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:35:33.250 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:35:33.250 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:35:33.250 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:35:33.250 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:35:33.250 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:35:33.250 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:35:33.250 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:35:33.250 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:35:33.250 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:35:33.250 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:35:33.250 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:35:33.509 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:35:34.442 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1692563
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1692563
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1692563 ']'
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
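nvmfappstart then launches the SPDK target inside that namespace, and waitforlisten blocks until the RPC socket answers before the test proceeds. A rough equivalent of those two steps is sketched below; the rpc_get_methods polling loop is a simplified stand-in for the suite's waitforlisten helper, not its exact code.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!

  # Simplified stand-in for waitforlisten: poll the RPC socket for up to ~10 s.
  for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done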
00:35:34.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.442 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:34.443 07:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.443 [2024-07-13 07:23:03.814929] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:35:34.443 [2024-07-13 07:23:03.815015] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.443 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.443 [2024-07-13 07:23:03.853092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:34.443 [2024-07-13 07:23:03.881381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:34.700 [2024-07-13 07:23:03.972303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.700 [2024-07-13 07:23:03.972379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.700 [2024-07-13 07:23:03.972392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.700 [2024-07-13 07:23:03.972403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.700 [2024-07-13 07:23:03.972412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.700 [2024-07-13 07:23:03.972499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.700 [2024-07-13 07:23:03.972564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:34.700 [2024-07-13 07:23:03.972628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:34.700 [2024-07-13 07:23:03.972630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:34.700 07:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.958 ************************************ 00:35:34.958 START TEST spdk_target_abort 00:35:34.958 ************************************ 00:35:34.958 07:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:35:34.958 07:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:34.958 07:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:34.958 07:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.958 07:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.233 spdk_targetn1 00:35:38.233 07:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.233 07:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:38.233 07:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.233 07:23:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.233 [2024-07-13 07:23:06.998745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:35:38.233 [2024-07-13 07:23:07.031027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:35:38.233 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.234 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:35:38.234 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.234 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:38.234 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.234 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:38.234 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
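rabort has now assembled the -r connection string field by field; the loop that follows replays SPDK's bundled abort example at queue depths 4, 24 and 64, so abort commands race different amounts of in-flight I/O. In condensed sketch form, with paths, queue depths and flags taken from the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # -q: queue depth, -w rw -M 50: mixed 50/50 read/write, -o 4096: 4 KiB I/O
      "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done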
00:35:38.234 07:23:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:38.234 EAL: No free 2048 kB hugepages reported on node 1
00:35:40.808 Initializing NVMe Controllers
00:35:40.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:35:40.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:40.808 Initialization complete. Launching workers.
00:35:40.808 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11634, failed: 0
00:35:40.808 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 10388
00:35:40.808 success 789, unsuccess 457, failed 0
00:35:40.808 07:23:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:40.808 07:23:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:40.808 EAL: No free 2048 kB hugepages reported on node 1
00:35:45.012 Initializing NVMe Controllers
00:35:45.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:35:45.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:45.012 Initialization complete. Launching workers.
00:35:45.012 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8402, failed: 0
00:35:45.012 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7164
00:35:45.012 success 342, unsuccess 896, failed 0
00:35:45.012 07:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:45.012 07:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:45.012 EAL: No free 2048 kB hugepages reported on node 1
00:35:47.536 Initializing NVMe Controllers
00:35:47.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:35:47.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:47.536 Initialization complete. Launching workers.
00:35:47.536 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31671, failed: 0 00:35:47.536 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2734, failed to submit 28937 00:35:47.536 success 512, unsuccess 2222, failed 0 00:35:47.536 07:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:47.536 07:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.536 07:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:47.536 07:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.536 07:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:47.536 07:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.536 07:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1692563 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1692563 ']' 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1692563 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1692563 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1692563' 00:35:48.911 killing process with pid 1692563 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1692563 00:35:48.911 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1692563 00:35:49.170 00:35:49.170 real 0m14.377s 00:35:49.170 user 0m54.530s 00:35:49.170 sys 0m2.647s 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.170 ************************************ 00:35:49.170 END TEST spdk_target_abort 00:35:49.170 ************************************ 00:35:49.170 07:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:49.170 07:23:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:49.170 07:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:49.170 07:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:49.170 07:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:49.170 
************************************ 00:35:49.170 START TEST kernel_target_abort 00:35:49.170 ************************************ 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:49.170 07:23:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:50.546 Waiting for block devices as requested 00:35:50.546 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:50.546 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:50.546 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:50.546 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:50.805 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:50.805 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:50.805 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:50.805 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:51.064 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:51.064 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:51.064 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:51.064 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:51.324 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:51.324 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:51.324 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:51.583 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:51.583 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:51.583 07:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:51.843 No valid GPT data, bailing 00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:51.843 07:23:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:35:51.843
00:35:51.843 Discovery Log Number of Records 2, Generation counter 2
00:35:51.843 =====Discovery Log Entry 0======
00:35:51.843 trtype: tcp
00:35:51.843 adrfam: ipv4
00:35:51.843 subtype: current discovery subsystem
00:35:51.843 treq: not specified, sq flow control disable supported
00:35:51.843 portid: 1
00:35:51.843 trsvcid: 4420
00:35:51.843 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:35:51.843 traddr: 10.0.0.1
00:35:51.843 eflags: none
00:35:51.843 sectype: none
00:35:51.843 =====Discovery Log Entry 1======
00:35:51.843 trtype: tcp
00:35:51.843 adrfam: ipv4
00:35:51.843 subtype: nvme subsystem
00:35:51.843 treq: not specified, sq flow control disable supported
00:35:51.843 portid: 1
00:35:51.843 trsvcid: 4420
00:35:51.843 subnqn: nqn.2016-06.io.spdk:testnqn
00:35:51.843 traddr: 10.0.0.1
00:35:51.843 eflags: none
00:35:51.843 sectype: none
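configure_kernel_target, traced above, drives the in-kernel nvmet target purely through configfs: one subsystem with a namespace backed by /dev/nvme0n1, one TCP port on 10.0.0.1:4420, and a symlink tying the two together, which is exactly what the discovery log then reports. A hedged reconstruction follows; the attribute file names are inferred from standard kernel nvmet configfs usage rather than shown in the xtrace, while the values come from the log. It assumes root and the nvmet/nvmet_tcp modules.

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn" > "$sub/attr_serial"             # assumption: destination of 'echo SPDK-nqn...'
  echo 1 > "$sub/attr_allow_any_host"               # assumption: destination of the first 'echo 1'
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                  # publish the subsystem on the port
  nvme discover -a 10.0.0.1 -t tcp -s 4420          # should list the discovery and testnqn entries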
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:51.843 07:23:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:51.843 EAL: No free 2048 kB hugepages reported on node 1
00:35:55.123 Initializing NVMe Controllers
00:35:55.123 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:35:55.123 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:55.123 Initialization complete. Launching workers.
00:35:55.123 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35666, failed: 0
00:35:55.123 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35666, failed to submit 0
00:35:55.123 success 0, unsuccess 35666, failed 0
00:35:55.123 07:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:55.123 07:23:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:55.123 EAL: No free 2048 kB hugepages reported on node 1
00:35:58.399 Initializing NVMe Controllers
00:35:58.399 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:35:58.399 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:58.399 Initialization complete. Launching workers.
00:35:58.399 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64572, failed: 0 00:35:58.399 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16270, failed to submit 48302 00:35:58.399 success 0, unsuccess 16270, failed 0 00:35:58.399 07:23:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:58.399 07:23:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:58.399 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.717 Initializing NVMe Controllers 00:36:01.717 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:01.717 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:01.717 Initialization complete. Launching workers. 00:36:01.718 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63423, failed: 0 00:36:01.718 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15862, failed to submit 47561 00:36:01.718 success 0, unsuccess 15862, failed 0 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:01.718 07:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:02.283 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:02.283 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:02.283 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:02.283 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:02.283 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:02.283 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:02.283 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:02.283 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:02.283 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:02.283 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:02.283 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:02.283 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:02.539 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:02.539 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:02.539 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:02.539 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:03.472 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:03.472 00:36:03.472 real 0m14.205s 00:36:03.472 user 0m5.223s 00:36:03.472 sys 0m3.325s 00:36:03.472 07:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:03.472 07:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:03.472 ************************************ 00:36:03.472 END TEST kernel_target_abort 00:36:03.472 ************************************ 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:03.472 rmmod nvme_tcp 00:36:03.472 rmmod nvme_fabrics 00:36:03.472 rmmod nvme_keyring 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1692563 ']' 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1692563 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1692563 ']' 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1692563 00:36:03.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1692563) - No such process 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1692563 is not found' 00:36:03.472 Process with pid 1692563 is not found 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:03.472 07:23:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:04.843 Waiting for block devices as requested 00:36:04.843 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:04.843 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:04.843 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:05.101 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:05.101 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:05.101 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:05.101 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:05.359 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:05.359 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:05.359 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:05.359 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:05.618 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:05.618 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:05.618 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:36:05.618 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:05.876 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:05.876 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:05.876 07:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:05.876 07:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:05.876 07:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:05.876 07:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:05.876 07:23:35 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.876 07:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:05.876 07:23:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.407 07:23:37 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:08.407 00:36:08.407 real 0m37.934s 00:36:08.407 user 1m1.836s 00:36:08.407 sys 0m9.316s 00:36:08.407 07:23:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:08.407 07:23:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 ************************************ 00:36:08.407 END TEST nvmf_abort_qd_sizes 00:36:08.407 ************************************ 00:36:08.407 07:23:37 -- common/autotest_common.sh@1142 -- # return 0 00:36:08.407 07:23:37 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:08.407 07:23:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:08.407 07:23:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:08.407 07:23:37 -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 ************************************ 00:36:08.407 START TEST keyring_file 00:36:08.407 ************************************ 00:36:08.407 07:23:37 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:08.407 * Looking for test storage... 
00:36:08.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.408 07:23:37 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.408 07:23:37 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.408 07:23:37 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.408 07:23:37 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.408 07:23:37 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.408 07:23:37 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.408 07:23:37 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:08.408 07:23:37 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EfsVbHCSoA 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:08.408 07:23:37 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EfsVbHCSoA 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EfsVbHCSoA 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.EfsVbHCSoA 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4VWCbgJkvG 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:08.408 07:23:37 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4VWCbgJkvG 00:36:08.408 07:23:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4VWCbgJkvG 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.4VWCbgJkvG 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@30 -- # tgtpid=1698330 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:08.408 07:23:37 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1698330 00:36:08.408 07:23:37 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1698330 ']' 00:36:08.408 07:23:37 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.408 07:23:37 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:08.408 07:23:37 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.408 07:23:37 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:08.408 07:23:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:08.408 [2024-07-13 07:23:37.546357] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:08.408 [2024-07-13 07:23:37.546458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698330 ] 00:36:08.408 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.408 [2024-07-13 07:23:37.581611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
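[Annotation] The prep_key calls traced above boil down to: create a temp file, wrap the raw hex key in the NVMe/TCP TLS PSK interchange form, and lock the file down to mode 0600 (a later negative test depends on that mode). A sketch of the equivalent, assuming the interchange layout is prefix, two-digit hash indicator, then base64 of the key plus a little-endian CRC32 — which is what the inline `python -` above computes:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)
  python3 - "$key" <<'PY' > "$path"
  import base64, sys, zlib
  raw = bytes.fromhex(sys.argv[1])
  crc = zlib.crc32(raw).to_bytes(4, "little")  # assumed little-endian trailer
  # digest 0 => "00": the PSK is used as-is, no hash function requested
  print("NVMeTLSkey-1:00:" + base64.b64encode(raw + crc).decode() + ":")
  PY
  chmod 0600 "$path"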
00:36:08.408 [2024-07-13 07:23:37.608422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.408 [2024-07-13 07:23:37.692746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:08.668 07:23:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:08.668 [2024-07-13 07:23:37.950667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:08.668 null0 00:36:08.668 [2024-07-13 07:23:37.982712] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:08.668 [2024-07-13 07:23:37.983218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:08.668 [2024-07-13 07:23:37.990725] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.668 07:23:37 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.668 07:23:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:08.668 [2024-07-13 07:23:38.002746] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:08.668 request: 00:36:08.668 { 00:36:08.668 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:08.668 "secure_channel": false, 00:36:08.668 "listen_address": { 00:36:08.668 "trtype": "tcp", 00:36:08.668 "traddr": "127.0.0.1", 00:36:08.668 "trsvcid": "4420" 00:36:08.668 }, 00:36:08.668 "method": "nvmf_subsystem_add_listener", 00:36:08.668 "req_id": 1 00:36:08.668 } 00:36:08.668 Got JSON-RPC error response 00:36:08.668 response: 00:36:08.668 { 00:36:08.668 "code": -32602, 00:36:08.668 "message": "Invalid parameters" 00:36:08.668 } 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:08.668 07:23:38 keyring_file -- keyring/file.sh@46 -- # bperfpid=1698340 00:36:08.668 07:23:38 
keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:08.668 07:23:38 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1698340 /var/tmp/bperf.sock 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1698340 ']' 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:08.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:08.668 07:23:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:08.668 [2024-07-13 07:23:38.050367] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:08.668 [2024-07-13 07:23:38.050432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698340 ] 00:36:08.668 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.668 [2024-07-13 07:23:38.081573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:08.668 [2024-07-13 07:23:38.111508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.926 [2024-07-13 07:23:38.203358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.926 07:23:38 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:08.926 07:23:38 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:08.926 07:23:38 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:08.926 07:23:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:09.184 07:23:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4VWCbgJkvG 00:36:09.184 07:23:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4VWCbgJkvG 00:36:09.442 07:23:38 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:09.442 07:23:38 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:09.442 07:23:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.442 07:23:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.442 07:23:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:09.699 07:23:39 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.EfsVbHCSoA == \/\t\m\p\/\t\m\p\.\E\f\s\V\b\H\C\S\o\A ]] 00:36:09.699 07:23:39 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:09.699 07:23:39 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:09.699 07:23:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:09.699 07:23:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.700 07:23:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:09.957 07:23:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.4VWCbgJkvG == \/\t\m\p\/\t\m\p\.\4\V\W\C\b\g\J\k\v\G ]] 00:36:09.957 07:23:39 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:09.957 07:23:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:09.957 07:23:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:09.957 07:23:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.957 07:23:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.957 07:23:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:10.214 07:23:39 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:10.214 07:23:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:10.214 07:23:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:10.214 07:23:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.214 07:23:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.214 07:23:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.214 07:23:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:10.472 07:23:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:10.472 07:23:39 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:10.472 07:23:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:10.729 [2024-07-13 07:23:40.053367] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:10.729 nvme0n1 00:36:10.729 07:23:40 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:10.729 07:23:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:10.729 07:23:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.729 07:23:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.729 07:23:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.729 07:23:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:10.987 07:23:40 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:10.987 07:23:40 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:10.987 07:23:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:10.987 07:23:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.987 07:23:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.987 07:23:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:10.987 07:23:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:11.244 07:23:40 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:11.244 07:23:40 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:11.502 Running I/O for 1 seconds... 00:36:12.435 00:36:12.435 Latency(us) 00:36:12.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.435 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:12.435 nvme0n1 : 1.02 4684.07 18.30 0.00 0.00 26994.70 7378.87 31068.92 00:36:12.435 =================================================================================================================== 00:36:12.435 Total : 4684.07 18.30 0.00 0.00 26994.70 7378.87 31068.92 00:36:12.435 0 00:36:12.435 07:23:41 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:12.435 07:23:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:12.693 07:23:42 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:12.693 07:23:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:12.693 07:23:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.693 07:23:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.693 07:23:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.693 07:23:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.950 07:23:42 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:12.950 07:23:42 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:12.950 07:23:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:12.950 07:23:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.950 07:23:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.950 07:23:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:12.950 07:23:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.206 07:23:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:13.206 07:23:42 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:13.206 07:23:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:13.206 07:23:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:13.206 07:23:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:13.206 07:23:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:13.206 07:23:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:13.206 07:23:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:13.206 07:23:42 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:13.206 07:23:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:13.466 [2024-07-13 07:23:42.769642] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:13.466 [2024-07-13 07:23:42.770416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16197b0 (107): Transport endpoint is not connected 00:36:13.466 [2024-07-13 07:23:42.771411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16197b0 (9): Bad file descriptor 00:36:13.466 [2024-07-13 07:23:42.772409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:13.466 [2024-07-13 07:23:42.772434] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:13.466 [2024-07-13 07:23:42.772449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:13.466 request: 00:36:13.466 { 00:36:13.466 "name": "nvme0", 00:36:13.466 "trtype": "tcp", 00:36:13.466 "traddr": "127.0.0.1", 00:36:13.466 "adrfam": "ipv4", 00:36:13.466 "trsvcid": "4420", 00:36:13.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.466 "prchk_reftag": false, 00:36:13.466 "prchk_guard": false, 00:36:13.466 "hdgst": false, 00:36:13.466 "ddgst": false, 00:36:13.466 "psk": "key1", 00:36:13.466 "method": "bdev_nvme_attach_controller", 00:36:13.466 "req_id": 1 00:36:13.466 } 00:36:13.466 Got JSON-RPC error response 00:36:13.466 response: 00:36:13.466 { 00:36:13.466 "code": -5, 00:36:13.466 "message": "Input/output error" 00:36:13.466 } 00:36:13.466 07:23:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:13.466 07:23:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:13.466 07:23:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:13.466 07:23:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:13.466 07:23:42 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:13.466 07:23:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:13.466 07:23:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:13.466 07:23:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.466 07:23:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:13.466 07:23:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.727 07:23:43 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:13.727 07:23:43 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:13.727 07:23:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:13.727 07:23:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:13.727 07:23:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.727 07:23:43 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:13.727 07:23:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.984 07:23:43 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:13.984 07:23:43 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:13.984 07:23:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:14.241 07:23:43 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:14.241 07:23:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:14.497 07:23:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:14.498 07:23:43 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:14.498 07:23:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.755 07:23:44 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:14.755 07:23:44 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.EfsVbHCSoA 00:36:14.755 07:23:44 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:14.755 07:23:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:14.755 07:23:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:14.755 07:23:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:14.755 07:23:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:14.755 07:23:44 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:14.755 07:23:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:14.755 07:23:44 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:14.755 07:23:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:15.012 [2024-07-13 07:23:44.266407] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EfsVbHCSoA': 0100660 00:36:15.012 [2024-07-13 07:23:44.266444] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:15.012 request: 00:36:15.012 { 00:36:15.012 "name": "key0", 00:36:15.012 "path": "/tmp/tmp.EfsVbHCSoA", 00:36:15.012 "method": "keyring_file_add_key", 00:36:15.012 "req_id": 1 00:36:15.012 } 00:36:15.012 Got JSON-RPC error response 00:36:15.012 response: 00:36:15.012 { 00:36:15.012 "code": -1, 00:36:15.012 "message": "Operation not permitted" 00:36:15.012 } 00:36:15.012 07:23:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:15.012 07:23:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:15.012 07:23:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:15.012 07:23:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:15.012 07:23:44 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.EfsVbHCSoA 00:36:15.012 07:23:44 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:15.012 07:23:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EfsVbHCSoA 00:36:15.269 07:23:44 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.EfsVbHCSoA 00:36:15.269 07:23:44 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:15.269 07:23:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:15.269 07:23:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.269 07:23:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.269 07:23:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.269 07:23:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.527 07:23:44 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:15.527 07:23:44 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:15.527 07:23:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:15.527 07:23:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:15.527 07:23:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:15.527 07:23:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.527 07:23:44 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:15.527 07:23:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.527 07:23:44 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:15.527 07:23:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:15.786 [2024-07-13 07:23:45.012449] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.EfsVbHCSoA': No such file or directory 00:36:15.786 [2024-07-13 07:23:45.012490] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:15.786 [2024-07-13 07:23:45.012539] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:15.786 [2024-07-13 07:23:45.012554] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:15.786 [2024-07-13 07:23:45.012567] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:15.786 request: 00:36:15.786 { 00:36:15.786 "name": "nvme0", 00:36:15.786 "trtype": "tcp", 00:36:15.786 "traddr": "127.0.0.1", 00:36:15.786 "adrfam": "ipv4", 00:36:15.786 "trsvcid": "4420", 00:36:15.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.786 "prchk_reftag": false, 00:36:15.786 
"prchk_guard": false, 00:36:15.786 "hdgst": false, 00:36:15.786 "ddgst": false, 00:36:15.786 "psk": "key0", 00:36:15.786 "method": "bdev_nvme_attach_controller", 00:36:15.786 "req_id": 1 00:36:15.786 } 00:36:15.786 Got JSON-RPC error response 00:36:15.786 response: 00:36:15.786 { 00:36:15.786 "code": -19, 00:36:15.786 "message": "No such device" 00:36:15.786 } 00:36:15.786 07:23:45 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:15.786 07:23:45 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:15.786 07:23:45 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:15.786 07:23:45 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:15.786 07:23:45 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:15.786 07:23:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:16.045 07:23:45 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0uP4KCLV06 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:16.045 07:23:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:16.045 07:23:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:16.045 07:23:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:16.045 07:23:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:16.045 07:23:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:16.045 07:23:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0uP4KCLV06 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0uP4KCLV06 00:36:16.045 07:23:45 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.0uP4KCLV06 00:36:16.045 07:23:45 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0uP4KCLV06 00:36:16.045 07:23:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0uP4KCLV06 00:36:16.303 07:23:45 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:16.303 07:23:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:16.560 nvme0n1 00:36:16.560 07:23:45 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:16.560 07:23:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:16.560 07:23:45 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.560 07:23:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.560 07:23:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.560 07:23:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:16.819 07:23:46 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:16.819 07:23:46 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:16.819 07:23:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:17.077 07:23:46 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:17.077 07:23:46 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:17.077 07:23:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.077 07:23:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.077 07:23:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.335 07:23:46 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:17.335 07:23:46 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:17.335 07:23:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:17.335 07:23:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.335 07:23:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.335 07:23:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.335 07:23:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.593 07:23:46 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:17.593 07:23:46 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:17.593 07:23:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:17.851 07:23:47 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:17.851 07:23:47 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:17.851 07:23:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.109 07:23:47 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:18.109 07:23:47 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0uP4KCLV06 00:36:18.109 07:23:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0uP4KCLV06 00:36:18.367 07:23:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4VWCbgJkvG 00:36:18.367 07:23:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4VWCbgJkvG 00:36:18.624 07:23:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.624 07:23:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.882 nvme0n1 00:36:18.882 07:23:48 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:18.882 07:23:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:19.142 07:23:48 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:19.142 "subsystems": [ 00:36:19.142 { 00:36:19.142 "subsystem": "keyring", 00:36:19.142 "config": [ 00:36:19.142 { 00:36:19.142 "method": "keyring_file_add_key", 00:36:19.142 "params": { 00:36:19.142 "name": "key0", 00:36:19.142 "path": "/tmp/tmp.0uP4KCLV06" 00:36:19.142 } 00:36:19.142 }, 00:36:19.142 { 00:36:19.142 "method": "keyring_file_add_key", 00:36:19.142 "params": { 00:36:19.142 "name": "key1", 00:36:19.142 "path": "/tmp/tmp.4VWCbgJkvG" 00:36:19.142 } 00:36:19.142 } 00:36:19.142 ] 00:36:19.142 }, 00:36:19.142 { 00:36:19.142 "subsystem": "iobuf", 00:36:19.142 "config": [ 00:36:19.142 { 00:36:19.142 "method": "iobuf_set_options", 00:36:19.142 "params": { 00:36:19.142 "small_pool_count": 8192, 00:36:19.142 "large_pool_count": 1024, 00:36:19.142 "small_bufsize": 8192, 00:36:19.142 "large_bufsize": 135168 00:36:19.142 } 00:36:19.142 } 00:36:19.142 ] 00:36:19.142 }, 00:36:19.142 { 00:36:19.142 "subsystem": "sock", 00:36:19.142 "config": [ 00:36:19.142 { 00:36:19.142 "method": "sock_set_default_impl", 00:36:19.142 "params": { 00:36:19.142 "impl_name": "posix" 00:36:19.142 } 00:36:19.142 }, 00:36:19.142 { 00:36:19.142 "method": "sock_impl_set_options", 00:36:19.142 "params": { 00:36:19.142 "impl_name": "ssl", 00:36:19.142 "recv_buf_size": 4096, 00:36:19.142 "send_buf_size": 4096, 00:36:19.142 "enable_recv_pipe": true, 00:36:19.142 "enable_quickack": false, 00:36:19.142 "enable_placement_id": 0, 00:36:19.142 "enable_zerocopy_send_server": true, 00:36:19.142 "enable_zerocopy_send_client": false, 00:36:19.142 "zerocopy_threshold": 0, 00:36:19.142 "tls_version": 0, 00:36:19.143 "enable_ktls": false 00:36:19.143 } 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "method": "sock_impl_set_options", 00:36:19.143 "params": { 00:36:19.143 "impl_name": "posix", 00:36:19.143 "recv_buf_size": 2097152, 00:36:19.143 "send_buf_size": 2097152, 00:36:19.143 "enable_recv_pipe": true, 00:36:19.143 "enable_quickack": false, 00:36:19.143 "enable_placement_id": 0, 00:36:19.143 "enable_zerocopy_send_server": true, 00:36:19.143 "enable_zerocopy_send_client": false, 00:36:19.143 "zerocopy_threshold": 0, 00:36:19.143 "tls_version": 0, 00:36:19.143 "enable_ktls": false 00:36:19.143 } 00:36:19.143 } 00:36:19.143 ] 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "subsystem": "vmd", 00:36:19.143 "config": [] 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "subsystem": "accel", 00:36:19.143 "config": [ 00:36:19.143 { 00:36:19.143 "method": "accel_set_options", 00:36:19.143 "params": { 00:36:19.143 "small_cache_size": 128, 00:36:19.143 "large_cache_size": 16, 00:36:19.143 "task_count": 2048, 00:36:19.143 "sequence_count": 2048, 00:36:19.143 "buf_count": 2048 00:36:19.143 } 00:36:19.143 } 00:36:19.143 ] 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "subsystem": "bdev", 00:36:19.143 "config": [ 00:36:19.143 { 00:36:19.143 "method": "bdev_set_options", 00:36:19.143 
"params": { 00:36:19.143 "bdev_io_pool_size": 65535, 00:36:19.143 "bdev_io_cache_size": 256, 00:36:19.143 "bdev_auto_examine": true, 00:36:19.143 "iobuf_small_cache_size": 128, 00:36:19.143 "iobuf_large_cache_size": 16 00:36:19.143 } 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "method": "bdev_raid_set_options", 00:36:19.143 "params": { 00:36:19.143 "process_window_size_kb": 1024 00:36:19.143 } 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "method": "bdev_iscsi_set_options", 00:36:19.143 "params": { 00:36:19.143 "timeout_sec": 30 00:36:19.143 } 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "method": "bdev_nvme_set_options", 00:36:19.143 "params": { 00:36:19.143 "action_on_timeout": "none", 00:36:19.143 "timeout_us": 0, 00:36:19.143 "timeout_admin_us": 0, 00:36:19.143 "keep_alive_timeout_ms": 10000, 00:36:19.143 "arbitration_burst": 0, 00:36:19.143 "low_priority_weight": 0, 00:36:19.143 "medium_priority_weight": 0, 00:36:19.143 "high_priority_weight": 0, 00:36:19.143 "nvme_adminq_poll_period_us": 10000, 00:36:19.143 "nvme_ioq_poll_period_us": 0, 00:36:19.143 "io_queue_requests": 512, 00:36:19.143 "delay_cmd_submit": true, 00:36:19.143 "transport_retry_count": 4, 00:36:19.143 "bdev_retry_count": 3, 00:36:19.143 "transport_ack_timeout": 0, 00:36:19.143 "ctrlr_loss_timeout_sec": 0, 00:36:19.143 "reconnect_delay_sec": 0, 00:36:19.143 "fast_io_fail_timeout_sec": 0, 00:36:19.143 "disable_auto_failback": false, 00:36:19.143 "generate_uuids": false, 00:36:19.143 "transport_tos": 0, 00:36:19.143 "nvme_error_stat": false, 00:36:19.143 "rdma_srq_size": 0, 00:36:19.143 "io_path_stat": false, 00:36:19.143 "allow_accel_sequence": false, 00:36:19.143 "rdma_max_cq_size": 0, 00:36:19.143 "rdma_cm_event_timeout_ms": 0, 00:36:19.143 "dhchap_digests": [ 00:36:19.143 "sha256", 00:36:19.143 "sha384", 00:36:19.143 "sha512" 00:36:19.143 ], 00:36:19.143 "dhchap_dhgroups": [ 00:36:19.143 "null", 00:36:19.143 "ffdhe2048", 00:36:19.143 "ffdhe3072", 00:36:19.143 "ffdhe4096", 00:36:19.143 "ffdhe6144", 00:36:19.143 "ffdhe8192" 00:36:19.143 ] 00:36:19.143 } 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "method": "bdev_nvme_attach_controller", 00:36:19.143 "params": { 00:36:19.143 "name": "nvme0", 00:36:19.143 "trtype": "TCP", 00:36:19.143 "adrfam": "IPv4", 00:36:19.143 "traddr": "127.0.0.1", 00:36:19.143 "trsvcid": "4420", 00:36:19.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.143 "prchk_reftag": false, 00:36:19.143 "prchk_guard": false, 00:36:19.143 "ctrlr_loss_timeout_sec": 0, 00:36:19.143 "reconnect_delay_sec": 0, 00:36:19.143 "fast_io_fail_timeout_sec": 0, 00:36:19.143 "psk": "key0", 00:36:19.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.143 "hdgst": false, 00:36:19.143 "ddgst": false 00:36:19.143 } 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "method": "bdev_nvme_set_hotplug", 00:36:19.143 "params": { 00:36:19.143 "period_us": 100000, 00:36:19.143 "enable": false 00:36:19.143 } 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "method": "bdev_wait_for_examine" 00:36:19.143 } 00:36:19.143 ] 00:36:19.143 }, 00:36:19.143 { 00:36:19.143 "subsystem": "nbd", 00:36:19.143 "config": [] 00:36:19.143 } 00:36:19.143 ] 00:36:19.143 }' 00:36:19.143 07:23:48 keyring_file -- keyring/file.sh@114 -- # killprocess 1698340 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1698340 ']' 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1698340 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:19.143 07:23:48 keyring_file -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1698340 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1698340' 00:36:19.143 killing process with pid 1698340 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@967 -- # kill 1698340 00:36:19.143 Received shutdown signal, test time was about 1.000000 seconds 00:36:19.143 00:36:19.143 Latency(us) 00:36:19.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.143 =================================================================================================================== 00:36:19.143 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:19.143 07:23:48 keyring_file -- common/autotest_common.sh@972 -- # wait 1698340 00:36:19.401 07:23:48 keyring_file -- keyring/file.sh@117 -- # bperfpid=1699794 00:36:19.401 07:23:48 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1699794 /var/tmp/bperf.sock 00:36:19.401 07:23:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1699794 ']' 00:36:19.401 07:23:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:19.401 07:23:48 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:19.401 07:23:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:19.401 07:23:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:19.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
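[Annotation] The `-c /dev/fd/63` on this second bdevperf invocation is bash process substitution: the JSON blob echoed next (the configuration captured from `save_config` on the first bdevperf before it was killed) is replayed as the new process's config file. Schematically, under that assumption:

  config=$(./scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")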
00:36:19.401 07:23:48 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:19.401 "subsystems": [ 00:36:19.401 { 00:36:19.401 "subsystem": "keyring", 00:36:19.401 "config": [ 00:36:19.401 { 00:36:19.401 "method": "keyring_file_add_key", 00:36:19.401 "params": { 00:36:19.401 "name": "key0", 00:36:19.401 "path": "/tmp/tmp.0uP4KCLV06" 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "keyring_file_add_key", 00:36:19.401 "params": { 00:36:19.401 "name": "key1", 00:36:19.401 "path": "/tmp/tmp.4VWCbgJkvG" 00:36:19.401 } 00:36:19.401 } 00:36:19.401 ] 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "subsystem": "iobuf", 00:36:19.401 "config": [ 00:36:19.401 { 00:36:19.401 "method": "iobuf_set_options", 00:36:19.401 "params": { 00:36:19.401 "small_pool_count": 8192, 00:36:19.401 "large_pool_count": 1024, 00:36:19.401 "small_bufsize": 8192, 00:36:19.401 "large_bufsize": 135168 00:36:19.401 } 00:36:19.401 } 00:36:19.401 ] 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "subsystem": "sock", 00:36:19.401 "config": [ 00:36:19.401 { 00:36:19.401 "method": "sock_set_default_impl", 00:36:19.401 "params": { 00:36:19.401 "impl_name": "posix" 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "sock_impl_set_options", 00:36:19.401 "params": { 00:36:19.401 "impl_name": "ssl", 00:36:19.401 "recv_buf_size": 4096, 00:36:19.401 "send_buf_size": 4096, 00:36:19.401 "enable_recv_pipe": true, 00:36:19.401 "enable_quickack": false, 00:36:19.401 "enable_placement_id": 0, 00:36:19.401 "enable_zerocopy_send_server": true, 00:36:19.401 "enable_zerocopy_send_client": false, 00:36:19.401 "zerocopy_threshold": 0, 00:36:19.401 "tls_version": 0, 00:36:19.401 "enable_ktls": false 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "sock_impl_set_options", 00:36:19.401 "params": { 00:36:19.401 "impl_name": "posix", 00:36:19.401 "recv_buf_size": 2097152, 00:36:19.401 "send_buf_size": 2097152, 00:36:19.401 "enable_recv_pipe": true, 00:36:19.401 "enable_quickack": false, 00:36:19.401 "enable_placement_id": 0, 00:36:19.401 "enable_zerocopy_send_server": true, 00:36:19.401 "enable_zerocopy_send_client": false, 00:36:19.401 "zerocopy_threshold": 0, 00:36:19.401 "tls_version": 0, 00:36:19.401 "enable_ktls": false 00:36:19.401 } 00:36:19.401 } 00:36:19.401 ] 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "subsystem": "vmd", 00:36:19.401 "config": [] 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "subsystem": "accel", 00:36:19.401 "config": [ 00:36:19.401 { 00:36:19.401 "method": "accel_set_options", 00:36:19.401 "params": { 00:36:19.401 "small_cache_size": 128, 00:36:19.401 "large_cache_size": 16, 00:36:19.401 "task_count": 2048, 00:36:19.401 "sequence_count": 2048, 00:36:19.401 "buf_count": 2048 00:36:19.401 } 00:36:19.401 } 00:36:19.401 ] 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "subsystem": "bdev", 00:36:19.401 "config": [ 00:36:19.401 { 00:36:19.401 "method": "bdev_set_options", 00:36:19.401 "params": { 00:36:19.401 "bdev_io_pool_size": 65535, 00:36:19.401 "bdev_io_cache_size": 256, 00:36:19.401 "bdev_auto_examine": true, 00:36:19.401 "iobuf_small_cache_size": 128, 00:36:19.401 "iobuf_large_cache_size": 16 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "bdev_raid_set_options", 00:36:19.401 "params": { 00:36:19.401 "process_window_size_kb": 1024 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "bdev_iscsi_set_options", 00:36:19.401 "params": { 00:36:19.401 "timeout_sec": 30 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": 
"bdev_nvme_set_options", 00:36:19.401 "params": { 00:36:19.401 "action_on_timeout": "none", 00:36:19.401 "timeout_us": 0, 00:36:19.401 "timeout_admin_us": 0, 00:36:19.401 "keep_alive_timeout_ms": 10000, 00:36:19.401 "arbitration_burst": 0, 00:36:19.401 "low_priority_weight": 0, 00:36:19.401 "medium_priority_weight": 0, 00:36:19.401 "high_priority_weight": 0, 00:36:19.401 "nvme_adminq_poll_period_us": 10000, 00:36:19.401 "nvme_ioq_poll_period_us": 0, 00:36:19.401 "io_queue_requests": 512, 00:36:19.401 "delay_cmd_submit": true, 00:36:19.401 "transport_retry_count": 4, 00:36:19.401 "bdev_retry_count": 3, 00:36:19.401 "transport_ack_timeout": 0, 00:36:19.401 "ctrlr_loss_timeout_sec": 0, 00:36:19.401 "reconnect_delay_sec": 0, 00:36:19.401 "fast_io_fail_timeout_sec": 0, 00:36:19.401 "disable_auto_failback": false, 00:36:19.401 "generate_uuids": false, 00:36:19.401 "transport_tos": 0, 00:36:19.401 "nvme_error_stat": false, 00:36:19.401 "rdma_srq_size": 0, 00:36:19.401 "io_path_stat": false, 00:36:19.401 "allow_accel_sequence": false, 00:36:19.401 "rdma_max_cq_size": 0, 00:36:19.401 "rdma_cm_event_timeout_ms": 0, 00:36:19.401 "dhchap_digests": [ 00:36:19.401 "sha256", 00:36:19.401 "sha384", 00:36:19.401 "sha512" 00:36:19.401 ], 00:36:19.401 "dhchap_dhgroups": [ 00:36:19.401 "null", 00:36:19.401 "ffdhe2048", 00:36:19.401 "ffdhe3072", 00:36:19.401 "ffdhe4096", 00:36:19.401 "ffdhe6144", 00:36:19.401 "ffdhe8192" 00:36:19.401 ] 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "bdev_nvme_attach_controller", 00:36:19.401 "params": { 00:36:19.401 "name": "nvme0", 00:36:19.401 "trtype": "TCP", 00:36:19.401 "adrfam": "IPv4", 00:36:19.401 "traddr": "127.0.0.1", 00:36:19.401 "trsvcid": "4420", 00:36:19.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.401 "prchk_reftag": false, 00:36:19.401 "prchk_guard": false, 00:36:19.401 "ctrlr_loss_timeout_sec": 0, 00:36:19.401 "reconnect_delay_sec": 0, 00:36:19.401 "fast_io_fail_timeout_sec": 0, 00:36:19.401 "psk": "key0", 00:36:19.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.401 "hdgst": false, 00:36:19.401 "ddgst": false 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "bdev_nvme_set_hotplug", 00:36:19.401 "params": { 00:36:19.401 "period_us": 100000, 00:36:19.401 "enable": false 00:36:19.401 } 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "method": "bdev_wait_for_examine" 00:36:19.401 } 00:36:19.401 ] 00:36:19.401 }, 00:36:19.401 { 00:36:19.401 "subsystem": "nbd", 00:36:19.401 "config": [] 00:36:19.401 } 00:36:19.401 ] 00:36:19.401 }' 00:36:19.401 07:23:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:19.401 07:23:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:19.401 [2024-07-13 07:23:48.783401] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:19.401 [2024-07-13 07:23:48.783497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699794 ] 00:36:19.401 EAL: No free 2048 kB hugepages reported on node 1 00:36:19.401 [2024-07-13 07:23:48.814574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:19.401 [2024-07-13 07:23:48.845784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:19.658 [2024-07-13 07:23:48.934510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:19.914 [2024-07-13 07:23:49.125086] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:36:20.479 07:23:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:20.479 07:23:49 keyring_file -- common/autotest_common.sh@862 -- # return 0
00:36:20.479 07:23:49 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys
00:36:20.479 07:23:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:20.479 07:23:49 keyring_file -- keyring/file.sh@120 -- # jq length
00:36:20.737 07:23:49 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 ))
00:36:20.737 07:23:49 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0
00:36:20.737 07:23:49 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:36:20.737 07:23:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:36:20.737 07:23:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:36:20.737 07:23:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:20.737 07:23:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:36:20.994 07:23:50 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:36:20.994 07:23:50 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1
00:36:20.994 07:23:50 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:36:20.994 07:23:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:36:20.994 07:23:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:36:20.994 07:23:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:20.994 07:23:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:36:21.252 07:23:50 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 ))
00:36:21.252 07:23:50 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers
00:36:21.252 07:23:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:36:21.252 07:23:50 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name'
00:36:21.510 07:23:50 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]]
00:36:21.510 07:23:50 keyring_file -- keyring/file.sh@1 -- # cleanup
00:36:21.510 07:23:50 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.0uP4KCLV06 /tmp/tmp.4VWCbgJkvG
00:36:21.510 07:23:50 keyring_file -- keyring/file.sh@20 -- # killprocess 1699794
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1699794 ']'
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1699794
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@953 -- # uname
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1699794
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1699794'
00:36:21.510 killing process with pid 1699794
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@967 -- # kill 1699794
00:36:21.510 Received shutdown signal, test time was about 1.000000 seconds
00:36:21.510
00:36:21.510 Latency(us)
00:36:21.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:21.510 ===================================================================================================================
00:36:21.510 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:36:21.510 07:23:50 keyring_file -- common/autotest_common.sh@972 -- # wait 1699794
00:36:21.767 07:23:50 keyring_file -- keyring/file.sh@21 -- # killprocess 1698330
00:36:21.767 07:23:50 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1698330 ']'
00:36:21.767 07:23:50 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1698330
00:36:21.767 07:23:50 keyring_file -- common/autotest_common.sh@953 -- # uname
00:36:21.767 07:23:50 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:21.767 07:23:50 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1698330
00:36:21.767 07:23:51 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:21.767 07:23:51 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:21.767 07:23:51 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1698330'
00:36:21.767 killing process with pid 1698330
00:36:21.767 07:23:51 keyring_file -- common/autotest_common.sh@967 -- # kill 1698330
00:36:21.767 [2024-07-13 07:23:51.013056] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:36:21.767 07:23:51 keyring_file -- common/autotest_common.sh@972 -- # wait 1698330
00:36:22.025
00:36:22.025 real 0m14.086s
00:36:22.025 user 0m34.943s
00:36:22.025 sys 0m3.254s
00:36:22.026 07:23:51 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:22.026 07:23:51 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:36:22.026 ************************************
00:36:22.026 END TEST keyring_file
00:36:22.026 ************************************
00:36:22.026 07:23:51 -- common/autotest_common.sh@1142 -- # return 0
00:36:22.026 07:23:51 -- spdk/autotest.sh@296 -- # [[ y == y ]]
00:36:22.026 07:23:51 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:36:22.026 07:23:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:36:22.026 07:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:22.026 07:23:51 -- common/autotest_common.sh@10 -- # set +x
00:36:22.284 ************************************
00:36:22.284 START TEST keyring_linux
00:36:22.284 ************************************
00:36:22.284 07:23:51 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:36:22.284 * Looking for test storage...
00:36:22.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.284 07:23:51 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.284 07:23:51 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.284 07:23:51 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.284 07:23:51 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.284 07:23:51 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.284 07:23:51 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.284 07:23:51 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:22.284 07:23:51 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:22.284 07:23:51 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:22.284 /tmp/:spdk-test:key0 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:22.284 07:23:51 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:22.284 07:23:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:22.284 /tmp/:spdk-test:key1 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1700152 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:22.284 07:23:51 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1700152 00:36:22.284 07:23:51 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1700152 ']' 00:36:22.284 07:23:51 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.284 07:23:51 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:22.284 07:23:51 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.284 07:23:51 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:22.284 07:23:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:22.284 [2024-07-13 07:23:51.692305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:22.284 [2024-07-13 07:23:51.692385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700152 ] 00:36:22.284 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.284 [2024-07-13 07:23:51.725588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
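The two "python -" invocations above are what format_interchange_psk runs: the configured key bytes are suffixed with a CRC and base64-encoded into the NVMe TLS PSK interchange string that ends up in /tmp/:spdk-test:key0 and key1. A self-contained sketch of that encoding follows; the exact layout (ASCII key bytes plus a 4-byte little-endian CRC32, digest indicator 00) is an assumption inferred from the output seen here, so verify it against nvmf/common.sh before relying on it:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                # the key is used as ASCII text, not decoded hex
crc = struct.pack("<I", zlib.crc32(key))  # assumed: 4-byte little-endian CRC32 suffix
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY

If the assumption holds, this prints the same NVMeTLSkey-1:00:MDAx... string echoed above.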
00:36:22.541 [2024-07-13 07:23:51.756606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.541 [2024-07-13 07:23:51.853625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:22.799 07:23:52 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:22.799 [2024-07-13 07:23:52.103971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.799 null0 00:36:22.799 [2024-07-13 07:23:52.136016] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:22.799 [2024-07-13 07:23:52.136471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.799 07:23:52 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:22.799 22597174 00:36:22.799 07:23:52 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:22.799 1053887597 00:36:22.799 07:23:52 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1700287 00:36:22.799 07:23:52 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1700287 /var/tmp/bperf.sock 00:36:22.799 07:23:52 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1700287 ']' 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:22.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:22.799 07:23:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:22.799 [2024-07-13 07:23:52.209552] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:22.799 [2024-07-13 07:23:52.209628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700287 ] 00:36:22.799 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.799 [2024-07-13 07:23:52.241191] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
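The two keyctl add calls above are the whole kernel-side setup: each returns the serial number (22597174 and 1053887597) under which the PSK now lives in the session keyring, and the cleanup path later resolves and unlinks those serials by name. The full lifecycle, condensed from the same commands this run traces:

# Add a user key to the session keyring; stdout is the serial number.
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)

# Resolve the serial back from the name, as get_keysn does below.
keyctl search @s user :spdk-test:key0

# Show the stored payload, then drop the link; prints "1 links removed".
keyctl print "$sn"
keyctl unlink "$sn"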
00:36:23.057 [2024-07-13 07:23:52.269827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.057 [2024-07-13 07:23:52.358998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.057 07:23:52 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:23.057 07:23:52 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:23.057 07:23:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:23.057 07:23:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:23.314 07:23:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:23.314 07:23:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:23.573 07:23:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:23.573 07:23:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:23.830 [2024-07-13 07:23:53.227463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:24.087 nvme0n1 00:36:24.087 07:23:53 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:24.087 07:23:53 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:24.087 07:23:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:24.087 07:23:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:24.087 07:23:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:24.087 07:23:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.344 07:23:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:24.344 07:23:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:24.344 07:23:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:24.344 07:23:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:24.344 07:23:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.344 07:23:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.344 07:23:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:24.600 07:23:53 keyring_linux -- keyring/linux.sh@25 -- # sn=22597174 00:36:24.600 07:23:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:24.600 07:23:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:24.600 07:23:53 keyring_linux -- keyring/linux.sh@26 -- # [[ 22597174 == \2\2\5\9\7\1\7\4 ]] 00:36:24.600 07:23:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 22597174 00:36:24.600 07:23:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:36:24.600 07:23:53 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:24.600 Running I/O for 1 seconds...
00:36:25.575
00:36:25.575 Latency(us)
00:36:25.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:25.575 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:25.575 nvme0n1 : 1.02 4755.62 18.58 0.00 0.00 26675.99 6140.97 33399.09
00:36:25.575 ===================================================================================================================
00:36:25.575 Total : 4755.62 18.58 0.00 0.00 26675.99 6140.97 33399.09
00:36:25.575 0
00:36:25.575 07:23:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:36:25.575 07:23:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:36:25.833 07:23:55 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:36:25.833 07:23:55 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:36:25.833 07:23:55 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:36:25.833 07:23:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:36:25.833 07:23:55 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:36:25.833 07:23:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:26.091 07:23:55 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:36:26.091 07:23:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:36:26.091 07:23:55 keyring_linux -- keyring/linux.sh@23 -- # return
00:36:26.091 07:23:55 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:26.091 07:23:55 keyring_linux -- common/autotest_common.sh@648 -- # local es=0
00:36:26.091 07:23:55 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:26.091 07:23:55 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:36:26.091 07:23:55 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:36:26.091 07:23:55 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:36:26.091 07:23:55 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:36:26.091 07:23:55 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:26.091 07:23:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:26.348 [2024-07-13 07:23:55.672211] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:36:26.348 [2024-07-13 07:23:55.672747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d81690 (107): Transport endpoint is not connected
00:36:26.348 [2024-07-13 07:23:55.673735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d81690 (9): Bad file descriptor
00:36:26.348 [2024-07-13 07:23:55.674733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:36:26.348 [2024-07-13 07:23:55.674765] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:36:26.348 [2024-07-13 07:23:55.674781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:36:26.348 request:
00:36:26.348 {
00:36:26.348 "name": "nvme0",
00:36:26.348 "trtype": "tcp",
00:36:26.348 "traddr": "127.0.0.1",
00:36:26.348 "adrfam": "ipv4",
00:36:26.348 "trsvcid": "4420",
00:36:26.348 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:26.348 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:26.348 "prchk_reftag": false,
00:36:26.348 "prchk_guard": false,
00:36:26.348 "hdgst": false,
00:36:26.349 "ddgst": false,
00:36:26.349 "psk": ":spdk-test:key1",
00:36:26.349 "method": "bdev_nvme_attach_controller",
00:36:26.349 "req_id": 1
00:36:26.349 }
00:36:26.349 Got JSON-RPC error response
00:36:26.349 response:
00:36:26.349 {
00:36:26.349 "code": -5,
00:36:26.349 "message": "Input/output error"
00:36:26.349 }
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@33 -- # sn=22597174
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 22597174
00:36:26.349 1 links removed
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@33 -- # sn=1053887597
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1053887597
00:36:26.349 1 links removed
00:36:26.349 07:23:55 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1700287
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1700287 ']'
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1700287
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1700287
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1700287'
00:36:26.349 killing process with pid 1700287
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@967 -- # kill 1700287
00:36:26.349 Received shutdown signal, test time was about 1.000000 seconds
00:36:26.349
00:36:26.349 Latency(us)
00:36:26.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:26.349 ===================================================================================================================
00:36:26.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:26.349 07:23:55 keyring_linux -- common/autotest_common.sh@972 -- # wait 1700287
00:36:26.607 07:23:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1700152
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1700152 ']'
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1700152
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1700152
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1700152'
00:36:26.607 killing process with pid 1700152
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@967 -- # kill 1700152
00:36:26.607 07:23:55 keyring_linux -- common/autotest_common.sh@972 -- # wait 1700152
00:36:27.172
00:36:27.172 real 0m4.920s
00:36:27.172 user 0m9.180s
00:36:27.172 sys 0m1.559s
00:36:27.172 07:23:56 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:27.172 07:23:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:36:27.172 ************************************
00:36:27.172 END TEST keyring_linux
00:36:27.172 ************************************
00:36:27.172 07:23:56 -- common/autotest_common.sh@1142 -- # return 0
00:36:27.172 07:23:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:36:27.172 07:23:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
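The negative path above is the core of the keyring_linux test: the target only knows the PSK behind :spdk-test:key0, so attaching with :spdk-test:key1 must fail, and the test passes precisely because the RPC returned the -5 Input/output error shown. Replayed standalone, using the same flags as the traced command (a sketch; it assumes the target and bperf instance from this run are still up):

if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
    echo "attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
fi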
00:36:27.172 07:23:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:27.172 07:23:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:27.172 07:23:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:27.172 07:23:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:27.172 07:23:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:27.172 07:23:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:27.172 07:23:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:27.172 07:23:56 -- common/autotest_common.sh@10 -- # set +x 00:36:27.172 07:23:56 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:27.172 07:23:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:27.172 07:23:56 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:27.172 07:23:56 -- common/autotest_common.sh@10 -- # set +x 00:36:29.071 INFO: APP EXITING 00:36:29.071 INFO: killing all VMs 00:36:29.071 INFO: killing vhost app 00:36:29.071 INFO: EXIT DONE 00:36:30.006 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:30.006 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:30.006 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:30.006 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:30.006 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:30.006 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:30.006 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:30.006 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:30.006 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:30.006 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:30.006 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:30.006 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:30.264 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:30.264 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:30.264 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:30.264 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:30.264 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:31.640 Cleaning 00:36:31.640 Removing: /var/run/dpdk/spdk0/config 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:31.640 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:31.640 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:31.640 Removing: /var/run/dpdk/spdk1/config 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:31.640 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:31.640 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:36:31.640 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:31.640 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:31.640 Removing: /var/run/dpdk/spdk2/config 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:31.640 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:31.640 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:31.640 Removing: /var/run/dpdk/spdk3/config 00:36:31.640 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:31.640 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:31.640 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:31.640 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:31.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:31.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:31.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:31.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:31.641 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:31.641 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:31.641 Removing: /var/run/dpdk/spdk4/config 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:31.641 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:31.641 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:31.641 Removing: /dev/shm/bdev_svc_trace.1 00:36:31.641 Removing: /dev/shm/nvmf_trace.0 00:36:31.641 Removing: /dev/shm/spdk_tgt_trace.pid1380512 00:36:31.641 Removing: /var/run/dpdk/spdk0 00:36:31.641 Removing: /var/run/dpdk/spdk1 00:36:31.641 Removing: /var/run/dpdk/spdk2 00:36:31.641 Removing: /var/run/dpdk/spdk3 00:36:31.641 Removing: /var/run/dpdk/spdk4 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1378963 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1379699 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1380512 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1380947 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1381638 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1381780 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1382494 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1382511 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1382747 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1384011 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1384983 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1385174 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1385473 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1385673 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1385861 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1386018 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1386185 
00:36:31.641 Removing: /var/run/dpdk/spdk_pid1386367 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1386671 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1389022 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1389190 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1389351 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1389356 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1389780 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1389794 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1390225 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1390231 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1390519 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1390529 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1390707 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1390723 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1391205 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1391364 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1391557 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1391725 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1391749 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1391937 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1392090 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1392326 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1392524 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1392682 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1392835 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1393112 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1393271 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1393428 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1393590 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1393854 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1394018 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1394169 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1394412 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1394640 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1394874 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1395027 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1395296 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1395463 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1395868 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1396383 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1396471 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1396679 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1398717 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1451523 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1454103 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1461540 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1464709 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1467048 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1467574 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1471539 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1475256 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1475258 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1475914 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1476508 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1477108 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1477515 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1477633 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1477769 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1477905 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1477907 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1478566 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1479105 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1479755 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1480162 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1480166 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1480422 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1481196 
00:36:31.641 Removing: /var/run/dpdk/spdk_pid1482025 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1487961 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1488145 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1490645 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1494350 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1496512 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1502763 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1507951 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1509145 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1509809 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1519990 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1522195 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1547871 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1550861 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1552036 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1553352 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1553403 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1553503 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1553640 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1554073 00:36:31.641 Removing: /var/run/dpdk/spdk_pid1555265 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1555985 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1556294 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1557937 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1558328 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1558886 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1561276 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1564528 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1568055 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1591567 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1594321 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1598084 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1599029 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1600122 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1602655 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1604891 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1609513 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1609566 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1612473 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1612604 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1612740 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1613125 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1613134 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1614201 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1615380 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1616558 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1617758 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1618933 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1620111 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1623910 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1624250 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1625647 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1626380 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1629967 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1631935 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1635231 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1639155 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1645402 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1649835 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1649837 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1662034 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1662445 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1662943 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1663376 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1663951 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1664358 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1664771 
00:36:31.900 Removing: /var/run/dpdk/spdk_pid1665290 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1667668 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1667926 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1671714 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1671873 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1673981 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1679004 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1679017 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1681802 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1683185 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1684586 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1685446 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1686839 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1687603 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1692930 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1693254 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1693645 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1695201 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1695599 00:36:31.900 Removing: /var/run/dpdk/spdk_pid1695876 00:36:31.901 Removing: /var/run/dpdk/spdk_pid1698330 00:36:31.901 Removing: /var/run/dpdk/spdk_pid1698340 00:36:31.901 Removing: /var/run/dpdk/spdk_pid1699794 00:36:31.901 Removing: /var/run/dpdk/spdk_pid1700152 00:36:31.901 Removing: /var/run/dpdk/spdk_pid1700287 00:36:31.901 Clean 00:36:31.901 07:24:01 -- common/autotest_common.sh@1451 -- # return 0 00:36:31.901 07:24:01 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:31.901 07:24:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:31.901 07:24:01 -- common/autotest_common.sh@10 -- # set +x 00:36:31.901 07:24:01 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:31.901 07:24:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:31.901 07:24:01 -- common/autotest_common.sh@10 -- # set +x 00:36:31.901 07:24:01 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:31.901 07:24:01 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:31.901 07:24:01 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:31.901 07:24:01 -- spdk/autotest.sh@391 -- # hash lcov 00:36:31.901 07:24:01 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:31.901 07:24:01 -- spdk/autotest.sh@393 -- # hostname 00:36:31.901 07:24:01 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:32.159 geninfo: WARNING: invalid characters removed from testname! 
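The coverage post-processing that follows is one capture (cov_test.info, produced by the lcov -c run above) plus a merge with the pre-test baseline and a series of subtractive filters. Condensed, with the repeated --rc flags and absolute workspace paths abbreviated (a sketch of the sequence, not the verbatim invocations):

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# Merge the pre-test baseline with the post-test capture.
$LCOV -a cov_base.info -a cov_test.info -o cov_total.info

# Strip trees that should not count toward SPDK coverage.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    $LCOV -r cov_total.info "$pat" -o cov_total.info
done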
00:37:04.218 07:24:29 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:04.476 07:24:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:07.789 07:24:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:10.317 07:24:39 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:13.593 07:24:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.122 07:24:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:19.406 07:24:48 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:19.406 07:24:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:19.406 07:24:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:19.406 07:24:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:19.406 07:24:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:19.406 07:24:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.406 07:24:48 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.406 07:24:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.406 07:24:48 -- paths/export.sh@5 -- $ export PATH 00:37:19.406 07:24:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.406 07:24:48 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:19.406 07:24:48 -- common/autobuild_common.sh@444 -- $ date +%s 00:37:19.406 07:24:48 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720848288.XXXXXX 00:37:19.406 07:24:48 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720848288.MbatPQ 00:37:19.406 07:24:48 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:37:19.406 07:24:48 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:37:19.406 07:24:48 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:19.406 07:24:48 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:19.406 07:24:48 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:19.406 07:24:48 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:19.406 07:24:48 -- common/autobuild_common.sh@460 -- $ get_config_params 00:37:19.406 07:24:48 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:37:19.406 07:24:48 -- common/autotest_common.sh@10 -- $ set +x 00:37:19.406 07:24:48 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:19.406 07:24:48 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:37:19.406 07:24:48 -- pm/common@17 -- $ local monitor 00:37:19.406 07:24:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.406 07:24:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.406 07:24:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.406 
07:24:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.406 07:24:48 -- pm/common@21 -- $ date +%s 00:37:19.406 07:24:48 -- pm/common@25 -- $ sleep 1 00:37:19.406 07:24:48 -- pm/common@21 -- $ date +%s 00:37:19.406 07:24:48 -- pm/common@21 -- $ date +%s 00:37:19.406 07:24:48 -- pm/common@21 -- $ date +%s 00:37:19.406 07:24:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720848288 00:37:19.406 07:24:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720848288 00:37:19.406 07:24:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720848288 00:37:19.406 07:24:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720848288 00:37:19.406 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720848288_collect-vmstat.pm.log 00:37:19.406 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720848288_collect-cpu-load.pm.log 00:37:19.406 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720848288_collect-cpu-temp.pm.log 00:37:19.406 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720848288_collect-bmc-pm.bmc.pm.log 00:37:20.343 07:24:49 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:37:20.343 07:24:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:20.343 07:24:49 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:20.343 07:24:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:20.343 07:24:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:20.343 07:24:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:20.343 07:24:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:20.343 07:24:49 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:20.343 07:24:49 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:20.343 07:24:49 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:20.343 07:24:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:20.343 07:24:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:20.343 07:24:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:20.343 07:24:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:20.343 07:24:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:20.343 07:24:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:20.343 07:24:49 -- pm/common@44 -- $ pid=1712157 00:37:20.343 07:24:49 -- pm/common@50 -- $ kill -TERM 1712157 00:37:20.343 07:24:49 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:20.343 07:24:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:20.343 07:24:49 -- pm/common@44 -- $ pid=1712159 00:37:20.343 07:24:49 -- pm/common@50 -- $ kill -TERM 1712159 00:37:20.343 07:24:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:20.343 07:24:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:20.343 07:24:49 -- pm/common@44 -- $ pid=1712161 00:37:20.343 07:24:49 -- pm/common@50 -- $ kill -TERM 1712161 00:37:20.343 07:24:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:20.343 07:24:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:20.343 07:24:49 -- pm/common@44 -- $ pid=1712189 00:37:20.343 07:24:49 -- pm/common@50 -- $ sudo -E kill -TERM 1712189 00:37:20.343 + [[ -n 1278948 ]] 00:37:20.343 + sudo kill 1278948 00:37:20.353 [Pipeline] } 00:37:20.369 [Pipeline] // stage 00:37:20.373 [Pipeline] } 00:37:20.389 [Pipeline] // timeout 00:37:20.393 [Pipeline] } 00:37:20.408 [Pipeline] // catchError 00:37:20.412 [Pipeline] } 00:37:20.427 [Pipeline] // wrap 00:37:20.431 [Pipeline] } 00:37:20.445 [Pipeline] // catchError 00:37:20.452 [Pipeline] stage 00:37:20.454 [Pipeline] { (Epilogue) 00:37:20.467 [Pipeline] catchError 00:37:20.469 [Pipeline] { 00:37:20.485 [Pipeline] echo 00:37:20.486 Cleanup processes 00:37:20.492 [Pipeline] sh 00:37:20.772 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:20.773 1712304 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:20.773 1712423 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:20.785 [Pipeline] sh 00:37:21.065 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:21.065 ++ grep -v 'sudo pgrep' 00:37:21.065 ++ awk '{print $1}' 00:37:21.065 + sudo kill -9 1712304 00:37:21.077 [Pipeline] sh 00:37:21.412 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:31.393 [Pipeline] sh 00:37:31.670 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:31.670 Artifacts sizes are good 00:37:31.684 [Pipeline] archiveArtifacts 00:37:31.691 Archiving artifacts 00:37:31.890 [Pipeline] sh 00:37:32.174 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:32.192 [Pipeline] cleanWs 00:37:32.204 [WS-CLEANUP] Deleting project workspace... 00:37:32.204 [WS-CLEANUP] Deferred wipeout is used... 00:37:32.211 [WS-CLEANUP] done 00:37:32.213 [Pipeline] } 00:37:32.232 [Pipeline] // catchError 00:37:32.244 [Pipeline] sh 00:37:32.518 + logger -p user.info -t JENKINS-CI 00:37:32.534 [Pipeline] } 00:37:32.553 [Pipeline] // stage 00:37:32.557 [Pipeline] } 00:37:32.568 [Pipeline] // node 00:37:32.571 [Pipeline] End of Pipeline 00:37:32.591 Finished: SUCCESS